h2.db file size difference - h2

I have an application that generates an H2 database.
When I execute the application on Windows XP it generates an .h2.db file with a size of 176K, but when I execute the same application on Unix (SunOS) it generates an .h2.db file with a size of 1126K, although they contain exactly the same data.
Can anyone explain what might be causing the UNIX generated file to be so much larger?
Thanks!
Martin

The easiest way to shrink the database file in this case is to open and close the database. An alternative is to run the statement SHUTDOWN COMPACT.
In your case, the "Unix" database is not fully compacted, meaning it contains empty pages in the database file (those empty pages most likely temporarily held the transaction log; this is normal). When closing the database, H2 tries to compact the database file by moving unused pages to the end of the file and then truncating the file. The default compact time is 0.2 seconds. Most likely, 0.2 seconds was not quite enough to fully compact the database on the "Unix" platform, but was enough on the "Windows" platform.

Related

How to get a Win32 program to update the file size while still writing files

I have a Win32 program that keeps a file open and writes data to it over a period of several hours. I'd like for the file size, as shown in an Explorer window, to be updated every so often.
As an example, when a browser is downloading a large file, you can see the file size change over time, even though the file is still downloading.
With my current naive implementation, the file size remains zero until I close the file.
How do I do this in Win32? Currently the file is open using std::ofstream. Is this a proper application of std::ostream::flush()? Or do I need to close and reopen the file with some regularity?
std::ostream::flush() makes sure your buffered data is handed over to the OS (on its own it doesn't guarantee the data has physically reached the disk). Flushing the buffer is a valid approach in situations where the automatic flushes aren't good enough for you (e.g. there's too little data written over too long periods, the data is written constantly but needs to be accessible constantly too, you need to be sure the data gets logged in case of a crash or power-down, etc.); yet, on some OS/filesystem combinations (see Why is the file size reported incorrectly for files that are still being written to?), that still won't update the file size accordingly. On Win32, you usually won't see size updates before actually closing/reopening the handle; sometimes re-reading the directory etc. will help, and sometimes it simply won't.
As such, you can use e.g. ReOpenFile to force that update, or simply close and reopen the file instead of flushing. The exact solution depends on whether you need the updated file size so direly that the reduced output rate is not a real problem (in which case reopening is the best option), or whether you can live with a wrong size being reported (in which case flushing is your best option IMO).
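A minimal sketch of the close/reopen approach, assuming the data goes through std::ofstream (the class name and the reopen-every-100-writes interval are invented purely for illustration):

#include <fstream>
#include <string>

// Sketch: write through std::ofstream, flush regularly, and occasionally
// close/reopen in append mode so the directory entry (and the size that
// Explorer shows) gets refreshed.
class SizeVisibleLog {
public:
    explicit SizeVisibleLog(const std::string& path)
        : path_(path), out_(path.c_str(), std::ios::out | std::ios::app),
          writesSinceReopen_(0) {}

    void write(const std::string& line) {
        out_ << line << '\n';
        out_.flush();                      // hand the data to the OS
        if (++writesSinceReopen_ >= 100) { // arbitrary interval
            out_.close();                  // closing is what refreshes the visible size
            out_.open(path_.c_str(), std::ios::out | std::ios::app);
            writesSinceReopen_ = 0;
        }
    }

private:
    std::string path_;
    std::ofstream out_;
    int writesSinceReopen_;
};

Flushing on every write keeps the data current on disk; the occasional close/reopen is what actually makes Explorer show an updated size.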

Move or copy and truncate a file that is in use

I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to. That way, the file being written to would never get too big.
Is this possible? Either Windows or Linux is fine.
To be specific what I'm trying to do is log video with FFMPEG and create hour long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need to use some kind of locking mechanism (either by locking some part of the file or some shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the same exact time that the other application were truncating it, then it would lead to unpredictable results.
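A minimal sketch of that sharing-plus-byte-lock idea (the helper names and the lock offset are invented for illustration; both programs would have to agree on the same offset):

#include <windows.h>

// Sketch: both processes open the file with full sharing and serialize
// "append" vs. "copy and truncate" by taking an exclusive lock on one
// agreed-upon byte far beyond the physical end of the file.
const DWORD kLockOffset = 0xFFFFFF00; // arbitrary, but must match in both programs

HANDLE OpenShared(const wchar_t* path)
{
    return CreateFileW(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ | FILE_SHARE_WRITE, // let the other app in too
                       NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
}

BOOL LockSemaphoreByte(HANDLE file, OVERLAPPED* ov)
{
    ZeroMemory(ov, sizeof(*ov));
    ov->Offset = kLockOffset;             // the byte acting as the "semaphore"
    return LockFileEx(file, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, ov); // blocks until acquired
}

void UnlockSemaphoreByte(HANDLE file, OVERLAPPED* ov)
{
    UnlockFileEx(file, 0, 1, 0, ov);
}

Whichever process currently holds that one-byte lock does its append or its copy-and-truncate, then releases it; the locked byte never needs to exist in the file, it just serves as a rendezvous point.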
Back to the comment about both applications needing to be aware of each other ... It is possible that if both applications opened the file exclusively, kept retrying until the open succeeded, performed their operation, and then closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
logrotate would be a good option on Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.

In what situation should I use ASCII to transfer a file over FTP? (I'm not asking the diff between ascii xfer and bin xfer)

I understand the difference between ASCII mode vs binary when it comes to FTP, but what I don't understand is why there is even a need for ASCII mode at all? Is this just a legacy thing that used to save time by eliminating the most significant bit, therefore causing the overall speed of the transfer to increase by 1/8th? Or is there some hidden use for it that I don't know about?
I've encountered many problems because I would forget to switch the mode to bin when transferring text between different OS's. I don't understand why "bin" isn't just the default for everything, especially with today's much faster internet speeds.
Knowwutimean, Vern?
ASCII mode exists so you can get the right answer when you upload a text file to a remote system without having to know what the line termination or character set conventions are for that system. It was more important when transferring text files was more often done via FTP than, say, email.
To address your practical problem: check the documentation for both your FTP client and server(s) to see if there's a way to set ASCII mode by default. Often this is as simple as some kind of "profile" that sends some FTP commands every time you connect.
To address your philosophical problem: FTP is a 40 year old protocol that has its fair share of historical baggage. One day you'll be very glad that some protocol you depend on was standardized long ago and you can still access some old data.
I, for one, vote to eliminate ascii mode from ftp servers. Any EOL translation can be done by applications consuming the files, and many apps today understand both EOL types anyway. At a minimum, I'd like to see servers switch to using binary by default, and only use ascii if requested.
One practical use of ASCII mode is uploading PHP, Perl, or similar scripts from a Windows development machine to a Unix server. Using binary mode would require a separate conversion of the line-ending sequences, while with ASCII mode the conversion is performed "automatically".
Update: there's one more scenario that we have come across - when transferring data to/from mainframes that use EBCDIC encoding, ASCII mode tells the server to perform conversion between encodings.
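A rough sketch of that Windows-to-Unix upload scenario from a C++ client using WinInet (the host name, credentials, and paths are placeholders):

#include <windows.h>
#include <wininet.h>
#pragma comment(lib, "wininet.lib")

// Sketch: upload a script in ASCII mode so the FTP layer converts the
// line endings for the Unix server.
int main()
{
    HINTERNET inet = InternetOpenW(L"example-client", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    HINTERNET ftp  = InternetConnectW(inet, L"ftp.example.com", INTERNET_DEFAULT_FTP_PORT,
                                      L"user", L"password", INTERNET_SERVICE_FTP, 0, 0);

    // FTP_TRANSFER_TYPE_ASCII requests text-mode transfer (line-ending translation);
    // FTP_TRANSFER_TYPE_BINARY would send the bytes untouched.
    BOOL ok = FtpPutFileW(ftp, L"script.php", L"/var/www/script.php",
                          FTP_TRANSFER_TYPE_ASCII, 0);

    InternetCloseHandle(ftp);
    InternetCloseHandle(inet);
    return ok ? 0 : 1;
}

Passing FTP_TRANSFER_TYPE_BINARY instead would transfer the bytes untouched, which is exactly when the line-ending problem described below can bite.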
Here's a practical example of a problem that comes from using a binary FTP connection. In PHP there are two types of comments:
// a single line comment like this
/* a block comment like this */
The block comment has a start and an end. But the single line comment just ends at the end of the line.
If you upload a PHP file with single-line comments using a binary connection and the server doesn't recognise your system's line endings, the PHP can stop running as soon as it hits the single-line comment. It doesn't recognise the end of the line as the end of the comment, so it effectively comments out the rest of your PHP script.
If, however, you use FTP in ASCII mode, the line endings are converted during the transfer, so PHP correctly sees the end of the line and runs your code as expected.

Programmatically empty out large text file when in use by another process

I am running a batch job that has been going for many many hours, and the log file it is generating is increasing in size very fast and I am worried about disk space.
Is there any way through the command line, or otherwise, that I could hollow out that text file (set its contents back to nothing) with the utility still having a handle on the file?
I do not wish to stop the job and am only looking to free up disk space via this file.
I'm on Vista, 64-bit.
Thanks for the help,
Well, it depends on how the job actually works. If it's a good little boy and pipes its log info out to stdout or stderr, you could redirect the output to a program that you write, which could then write the contents out to disk and manage the sizes.
If you have access to the job's code, you could essentially tell it to close the file after each write (hopefully it's an append) operation, and then you would have a timeslice in which you could actually wipe the file.
If you don't have either one, it's going to be a bit tough. If someone has an open handle to the file, there's not much you can do, IMO, without asking the developer of the application to find you a better solution, or just plain clearing out disk space.
Depends on how it is writing the log file. You cannot just delete the start of the file, because the file handle has an offset of where to write next. It will still be writing at 100 MB into the file even though you just deleted the first 50 MB.
You could try renaming the file and hoping it just creates a new one. This is usually how rolling logs work.
You can use a rolling log class, which will wrap the regular file class but silently seek back to the beginning of the file when the file reaches a maximum designated size.
It is a very simple wrapper; either write it yourself or try finding an implementation online.
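A minimal sketch of such a wrapper, assuming you can change the writing code to log through it (the class name and the flush-per-write policy are just illustrative):

#include <fstream>
#include <string>

// Sketch: wrap std::ofstream and silently seek back to the start once the
// write position passes a maximum size, so the log keeps reusing the same space.
class RollingLog {
public:
    RollingLog(const std::string& path, std::streamoff maxBytes)
        : out_(path.c_str(), std::ios::out | std::ios::binary), maxBytes_(maxBytes) {}

    void write(const std::string& line) {
        if (static_cast<std::streamoff>(out_.tellp()) >= maxBytes_) {
            out_.seekp(0); // wrap around instead of growing forever
        }
        out_ << line << '\n';
        out_.flush();
    }

private:
    std::ofstream out_;
    std::streamoff maxBytes_;
};

// Usage: RollingLog log("job.log", 10 * 1024 * 1024); log.write("status ...");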

Page error 0xc0000006 with VC++

I have a VS 2005 application using C++. It basically imports a large XML file of around 9 GB into the application. After running for more than 18 hours it gave the exception 0xc0000006 (in-page error). The virtual memory consumed is 2.6 GB (I have set the 3GB flag).
Does anyone have a clue as to what caused this error and what the solution could be?
Instead of loading the whole file into memory, you can use a SAX parser to keep only a small part of the file in memory at a time.
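A bare-bones sketch of that streaming approach with Expat (the file name and the empty handlers are placeholders; the real work would live in the callbacks, and only a 64 KB buffer is held in memory at any time):

#include <cstdio>
#include <expat.h>

// Sketch: feed the huge XML file to Expat in 64 KB pieces; the element
// callbacks keep only the little state you actually need.
static void XMLCALL onStartElement(void* userData, const XML_Char* name, const XML_Char** attrs)
{
    (void)userData; (void)name; (void)attrs; // inspect name/attrs, accumulate only what you need
}

static void XMLCALL onEndElement(void* userData, const XML_Char* name)
{
    (void)userData; (void)name;
}

int main()
{
    std::FILE* f = std::fopen("huge.xml", "rb"); // placeholder file name
    if (!f) return 1;

    XML_Parser parser = XML_ParserCreate(NULL);
    XML_SetElementHandler(parser, onStartElement, onEndElement);

    char buf[64 * 1024];
    int done = 0;
    while (!done) {
        std::size_t len = std::fread(buf, 1, sizeof(buf), f);
        done = (len < sizeof(buf));
        if (XML_Parse(parser, buf, static_cast<int>(len), done) == XML_STATUS_ERROR) {
            std::fprintf(stderr, "parse error at line %lu\n",
                         static_cast<unsigned long>(XML_GetCurrentLineNumber(parser)));
            break;
        }
    }

    XML_ParserFree(parser);
    std::fclose(f);
    return 0;
}

Memory use stays flat regardless of how big the XML file is, which also avoids the constant paging to disk described in the answer below.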
9 GB seems overly large to read in. I would say that even 3 GB is too large in one go.
Is your OS 64bit?
What is the maximum pagefile size set to?
How much RAM do you have?
Were you running this in debug or release mode?
I would suggest that you try reading the XML in smaller chunks.
Why are you trying to read in such a large file in one go?
I would imagine that your application took so long to run before failing because it started to copy the file into virtual memory, which is basically backed by a large file on the hard disk. So the OS was reading the XML from the disk and writing it back onto a different area of the disk.
Edit - added text below
Having had a quick peek at the Expat XML parser, it does look as if you're running into problems with stack or event handling; most likely you are adding too much to the stack.
Do you really need 3 GB of data on the stack? At a guess I would say that you are trying to process an XML database file, but I can't imagine that you have a table row that is so large.
I think you should really use it to search for the key areas and discard what is not wanted.
I know nothing other than what I have just read about Expat XML Parser but would suggest that you are not using it in the most efficient manner.
