I have a small project in Go that receives text lines over TCP for processing. However, to ensure robustness, I want to create some sort of journal so that nothing is lost in case of a power failure (e.g. a frame of data is received by my app but is not yet processed).
I have googled for any guides on how a journal file should be implemented, but the search results are heavily polluted by Oracle RDBMS documentation and such.
My thought was something like: immediately after receiving a line, write it to a file with a "not processed" flag. After processing, update the file so that this flag is cleared, opening the slot up for overwrites. At the same time as this flag is cleared, send a "processed ack" to the data sender. Perhaps it's easiest to deal with fixed-size "slots" in the journal, so that I can reuse freed slots rather than having an ever-increasing file, and maintain a "free list" of unused slots.
Is there any "best practice" for implementing such files in custom code, i.g.e with regards to file structure, padding and locking? Are there any concerns doing so in Go as it is cross-platform rather than using native file-system APIs?
You shouldn't rewrite a journal. Just append the operations to it so that you can recreate them, and then control the strictness level you want.
The logic should simply be (see the Go sketch after this list):
receive message
write it to journal
optionally do an fsync on the journal now - depending on your consistency requirements.
optionally then send a "received ack" - depends on your needs.
process the message.
optionally write another "processed" record to the file with the id of the record. You don't always need that, but this is how you avoid rewriting the old record. Alternatively you can write a separate file with the "top transaction id" you've processed, so you'll automatically know where to begin processing again in case of a failure. This will reduce the journal size.
send a "processed ack" or "processing failure" - again, depends on what you want.
Databases usually let you control the fsync behavior - every write, every N seconds, or whenever the OS decides - it's a matter of speed vs. durability.
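For the "every N seconds" policy, a hedged sketch in Go could be a background goroutine that periodically syncs the journal file (the interval, channel wiring and function name are made up for illustration):

```go
package journal

import (
	"log"
	"os"
	"time"
)

// syncEvery flushes f to disk on a fixed interval instead of on every
// write; a crash can lose at most the last interval's worth of data.
func syncEvery(f *os.File, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := f.Sync(); err != nil {
				log.Printf("journal sync: %v", err)
			}
		case <-stop:
			return
		}
	}
}
```

You would launch it alongside the writer, e.g. `go syncEvery(journalFile, time.Second, stop)`.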
A good read on the subject might be this post on redis persistence:
http://oldblog.antirez.com/post/redis-persistence-demystified.html
[EDIT] another great read on the subject - http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
As for the Go aspect of it - there are a few options for writing to files, from a low-level file handle (os.File) to a buffered writer (bufio.Writer). Of course a raw file handle will keep you most in control of what's going on under the hood. I'm not sure how much caching a buffered writer in Go does behind the scenes; I'd suggest you read the source if you intend to use it.
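For reference, the two ends of that spectrum look roughly like this. Note that bufio.Writer buffers in your own process, so it must be flushed before any Sync or the data may not even have reached the OS yet (the file name and content below are placeholders):

```go
package main

import (
	"bufio"
	"log"
	"os"
)

func main() {
	f, err := os.OpenFile("journal.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Low level: write straight through the file handle, then fsync.
	if _, err := f.WriteString("raw line\n"); err != nil {
		log.Fatal(err)
	}
	if err := f.Sync(); err != nil {
		log.Fatal(err)
	}

	// Buffered: faster for many small writes, but the data sits in
	// process memory until Flush, so always Flush before Sync.
	w := bufio.NewWriter(f)
	if _, err := w.WriteString("buffered line\n"); err != nil {
		log.Fatal(err)
	}
	if err := w.Flush(); err != nil {
		log.Fatal(err)
	}
	if err := f.Sync(); err != nil {
		log.Fatal(err)
	}
}
```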
Related
I am looking for ways to avoid transferring duplicate files over HTTP and SFTP. Each time a transfer is performed, my system stores the state of the transfer in an external cache.
Before each transfer, I look up the external cache, and if there is an entry for the current file with the status SUCCESS, the file is skipped. This works well as long as my system is able to store the status in the cache each time a transfer happens. But if the service dies after the transfer is done and before the status is written, the service has no clue about the transfer, and the next time the same file comes along I will re-transfer it.
One way to improve this is to update the cache before and after the transfer is done so that I will have some clue about the file. But is there any other way to avoid this? Because once the file is transferred to the external system, there is no way to undo it when the writing of the status fails. Any thoughts?
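One way to picture that "update the cache before and after" idea is a two-phase status: mark the file IN_PROGRESS before sending and SUCCESS afterwards, so a crash at least leaves evidence that a transfer may have completed. A rough Go sketch, with the cache and the transport abstracted behind hypothetical interfaces (none of these names come from the question):

```go
package transfer

import "fmt"

// Status values recorded in the external cache (names are illustrative).
const (
	StatusInProgress = "IN_PROGRESS"
	StatusSuccess    = "SUCCESS"
)

// Cache and Sender stand in for the real external cache and transport.
type Cache interface {
	Get(key string) (string, bool)
	Set(key string, status string) error
}

type Sender func(file string) error

// TransferOnce skips files already marked SUCCESS, and flags files whose
// previous attempt may have completed but was never acknowledged.
func TransferOnce(c Cache, send Sender, file string) error {
	switch status, ok := c.Get(file); {
	case ok && status == StatusSuccess:
		return nil // already transferred
	case ok && status == StatusInProgress:
		// A previous run died mid-flight: the file *may* already be on
		// the remote side. Decide here whether to re-send or reconcile.
		return fmt.Errorf("%s: previous transfer in unknown state", file)
	}
	if err := c.Set(file, StatusInProgress); err != nil {
		return err
	}
	if err := send(file); err != nil {
		return err
	}
	return c.Set(file, StatusSuccess)
}
```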
I routinely synchronize external data and have written enough mastering processes to speak on the subject. You are asking for logistics solutions without even mentioning the context of the data and its purpose in being delivered to another location.
Are you trying to mirror a master copy of the file to another location? If so, then you need to simply deliver the file with a unique delivery number attached, allowing the recipient to independently synchronize both data sets and handle any detected differences in the files. If you are forcibly doing this work on behalf of the recipient, you may be destroying data. I consistently recommend having the recipient pull the data themselves as needed and synchronize/master it themselves, rather than pushing it. That way these business rules are organized where they should be. Push processes are bad.
Are you trying to allow users to overwrite a master file with their own copies, asking how to coordinate their uploads so that the file isn't overwritten? If so, you need to take away their direct control to overwrite that file. You need to separately synchronize each file according to a user-defined process, because each can have its own business rules.
When you say "look up the external cache and if there is an entry for the current file with the status SUCCESS, the file will be skipped", you have given far too much responsibility to the deliverer. I say that, but how do you know? In manufacturing, no deliverer would be expected to do more than carry the load. Consumers are responsible for allocating that space. If the consumer truly needs the file, let it make the decision to order it and handle receiving it, rather than having the deliverer juggle such decisions.
I use basic_managed_mapped_file and I want to back up the file while the program is running.
How can I make sure the data is written to disk for the backup?
The answer is logically "yes".
The operating system will make sure that the data is written, I believe even if your process were to crash next.
However, if you
must be sure that the data hit the disk before doing anything else
need to ensure that data hits the disk in any particular order (e.g. journaling/intent logging)
need to be sure that data is safely written in the face of e.g. power failure
then you will need to add a disk sync call on most OS-es. If you require this level of detail (and worse, in portable fashion), the topic quickly becomes hard, and I defer to
"Eat My Data: How Everybody Gets File IO Wrong" by Stewart Smith
I've also mirrored the video/slides just in case (see here)
If one process does a write() of size (and alignment) S (e.g. 8KB), then is it possible for another process to do a read (also of size and alignment S and the same file) that sees a mix of old and new data?
The writing process adds a checksum to each data block, and I'd like to know whether I can use a reading process to verify the checksums in the background. If the reader can see a partial write, then it will falsely indicate corruption.
What standards or documents apply here? Is there a portable way to avoid problems here, preferably without introducing lots of locking?
When a function is guaranteed to complete without there being any chance of any other process/thread/anything seeing things in a half finished state, it's said to be atomic. It either has or hasn't happened, there is no part way. While I can't speak to Windows, there are very few file operations in POSIX (which is what Linux/BSD/etc attempt to stick to) that are guaranteed to be atomic. Reading and writing are not guaranteed to be atomic.
While it would be pretty unlikely for you to write 2 bytes to a file and another process only see one of those bytes written, if by dumb luck your write straddled two different pages in memory and the VM system had to do something to prepare the second page, it's possible you'd see one byte without the other in a second process. Usually if things are page aligned in your file, they will be in memory, but again you can't rely on that.
Here's a list someone made of what is atomic in POSIX, which is pretty short, and I can't vouch for its authenticity. (I can't think of why unlink isn't listed, for example.)
I'd also caution you against testing what appears to work and running with it, the moment you start accessing files over a network file system (NFS on Unix, or SMB mounts in Windows) a lot of things that seemed to be atomic before no longer are.
If you want to have a second process calculating checksums while a first process is writing the file, you may want to open a pipe between the two and have the first process write a copy of everything down the pipe to the checksumming process. That may be faster than dealing with locking.
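As a rough illustration of that tee-to-a-checksummer idea, here is an in-process Go analog using io.Pipe and io.MultiWriter; for two genuinely separate processes you would write down an OS pipe or socket instead, so treat everything here (file name, hash choice) as illustrative:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Create("data.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Everything written to w goes both to the file and down the pipe.
	pr, pw := io.Pipe()
	w := io.MultiWriter(f, pw)

	// The "checksumming process" end of the pipe, here just a goroutine.
	done := make(chan string)
	go func() {
		h := sha256.New()
		if _, err := io.Copy(h, pr); err != nil {
			log.Printf("checksum reader: %v", err)
		}
		done <- hex.EncodeToString(h.Sum(nil))
	}()

	if _, err := w.Write([]byte("a block of data\n")); err != nil {
		log.Fatal(err)
	}
	pw.Close() // signal end of stream to the checksummer
	fmt.Println("checksum:", <-done)
}
```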
On Mac OS X, I have a process which produces JSON objects, and another intermittent process which should consume them. The producer and consumer processes are independent of each other. Objects will be produced no more often than every 5 seconds, and will typically be several hundred bytes, but may range up into megabytes sometimes. The objects should be communicated first-in-first-out. The consumer may or may not be running when the producer is producing, and may or may not read objects immediately.
My boneheaded solution is:
Create a directory.
Producer writes each JSON object to a text file, names it with a serial number.
When Consumer launches, it reads and then deletes files in serial-number order, and while it is running, uses FSEvents to watch this directory for new files arriving.
Is there any easier or better way to do this?
The modern way to do this, as of Lion, is to use XPC. Unfortunately, there's no good documentation of it; there's a broad overview in the Daemons and Services guide and a primitive HeaderDoc-generated reference, but the best way to get introduced to it is to watch the session about it from last year's WWDC sessions.
With XPC, you won't have to worry about keeping serial numbers serial, having to contend for a spinning disk, or whether there's enough disk space. Indeed, you don't even have to generate and parse JSON data at all, since XPC's communication mechanism is built around JSON-esque/plist-esque container and value objects.
Assuming you want the consumer to see the old files, this is the way it's been done since the beginning of time - loathsome though it may be.
There are lots of higher-tech things that look cleaner - but honestly, they just tend to add complexity and/or deployment infrastructure that adds hassle. What you suggest works, it works well, and it's easy to write and maintain. You might need some kind of sentinel files to track what you are doing for crash recovery, but that's probably about it.
Hell, most people would just poll with a sleep 5. At least you are all up in the FSEvents.
Now if it was acceptable to lose the events generated when the listener wasn't around, and perf was paramount - it could get more interesting. :)
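For what it's worth, a minimal Go sketch of the file-per-message scheme might look like the following. The zero-padded names and the write-to-temp-then-rename step are my own assumptions (the rename keeps the consumer from ever seeing a half-written file); the original setup is OS X with FSEvents, so treat this purely as a sketch of the pattern:

```go
package fileq

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// Put writes one message as a file named by a zero-padded serial number.
// Writing to a temp name and renaming means the consumer never sees a
// partially written file.
func Put(dir string, serial uint64, msg []byte) error {
	tmp := filepath.Join(dir, fmt.Sprintf(".tmp-%012d", serial))
	if err := os.WriteFile(tmp, msg, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, fmt.Sprintf("%012d.json", serial)))
}

// Drain reads and deletes all pending messages in serial-number order.
func Drain(dir string, handle func([]byte) error) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	var names []string
	for _, e := range entries {
		if !e.IsDir() && filepath.Ext(e.Name()) == ".json" {
			names = append(names, e.Name())
		}
	}
	sort.Strings(names) // zero-padding makes lexical order == serial order
	for _, name := range names {
		path := filepath.Join(dir, name)
		msg, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if err := handle(msg); err != nil {
			return err
		}
		if err := os.Remove(path); err != nil {
			return err
		}
	}
	return nil
}
```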
I have an ISAPI filter that runs on IIS6 or 7. When there are multiple worker processes ("Web garden"), the filter will be loaded and run in each w3wp.exe.
How can I efficiently allow the filter to log its activities in a single consolidated logfile?
log messages from the different (concurrent) processes must not interfere with each other. In other words, a single log message emitted from any of the w3wp.exe must be realized as a single contiguous line in the log file.
there should be minimal contention for the logfile. The websites may serve 100's of requests per second.
strict time ordering is preferred. In other words if w3wp.exe process #1 emits a message at t1, then process #2 emits a message at t2, then process #1 emits a message at t3, the messages should appear in proper time order in the log file.
The current approach I have is that each process owns a separate logfile. This has obvious drawbacks.
Some ideas:
nominate one of the w3wp.exe to be "logfile owner" and send all log messages through that special process. This has problems in case of worker process recycling.
use an OS mutex to protect access to the logfile. Is this high-perf enough? In this case each w3wp.exe would have a FILE handle open on the same filesystem file. Must I fflush the logfile after each write? Will this work?
any suggestions?
At first I was going to say that I like your current approach best, because each process shares nothing, and then I realized that, well, they are probably all sharing the same hard drive underneath. So, there's still a bottleneck where contention occurs. Or maybe the OS and hard drive controllers are really smart about handling that?
I think what you want to do is have the writing of the log not slow down the threads that are doing the real work.
So, run another process on the same machine (lower priority?) which actually writes the log messages to disk. Communicate with the other process not via UDP as suggested, but rather via memory that the processes share - also known, confusingly, as a memory-mapped file. More about memory mapped files. At my company, we have found memory-mapped files to be much faster than loopback TCP/IP for communication on the same box, so I'm assuming it would be faster than UDP too.
What you actually have in your shared memory could be, for starters, a std::queue whose pushes and pops are protected by a mutex. Your ISAPI threads would grab the mutex to put things into the queue. The logging process would grab the mutex to pull things off the queue, release the mutex, and then write the entries to disk. The mutex only protects the updating of shared memory, not the updating of the file, so it seems in theory that it would be held for a shorter time, creating less of a bottleneck.
The logging process could even re-arrange the order of what it's writing to get the timestamps in order.
Here's another variation: continue to have a separate log for each process, but have a logger thread within each process so that the main time-critical threads don't have to wait for the logging to occur in order to proceed with their work.
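That per-process logger thread pattern, sketched in Go purely to keep this page's examples in one language (the ISAPI filter itself would be C/C++, so this only illustrates the shape): worker threads hand lines to a queue and a single background writer drains it.

```go
package asynclog

import (
	"os"
)

// Logger decouples the threads doing real work from the disk write:
// Log only sends on a buffered channel, and a single goroutine appends
// to the file.
type Logger struct {
	ch   chan string
	done chan struct{}
}

func New(path string, buffer int) (*Logger, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	l := &Logger{ch: make(chan string, buffer), done: make(chan struct{})}
	go func() {
		defer close(l.done)
		defer f.Close()
		for line := range l.ch {
			f.WriteString(line + "\n") // write errors ignored in this sketch
		}
	}()
	return l, nil
}

// Log queues a line without blocking on the disk (it can still block if
// the buffer fills faster than the writer drains it).
func (l *Logger) Log(line string) { l.ch <- line }

// Close flushes the queue and waits for the writer to finish.
func (l *Logger) Close() {
	close(l.ch)
	<-l.done
}
```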
The problem with everything I've written here is that the whole system - hardware, OS, the way multicore CPU L1/L2 caches work, your software - is too complex to be easily predictable by just thinking it through. Code up some simple proof-of-concept apps, instrument them with some timings, and try them out on the real hardware.
Would logging to a database make sense here?
I've used a UDP-based logging system in the past and I was happy with this kind of solution.
The logs are sent via UDP to a log-collector process which is in charge of saving them to a file on a regular basis.
I don't know if it can work in your high-perf context, but I was satisfied with that solution in a less heavily loaded application.
I hope it helps.
Rather than an OS Mutex to control access to the file, you could just use Win32 file locking mechanisms with LockFile() and UnlockFile().
My suggestion is to send messages asynchronously (UDP) to a process that will take charge of recording the log.
The process will (a Go sketch follows this list):
- have one receiver thread that puts incoming messages in a queue;
- have one thread responsible for removing messages from the queue and putting them into a time-ordered list;
- have one thread that monitors the list and writes out only messages that have waited longer than a minimum delay (to prevent a delayed message from being written out of order).
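Here is a hedged Go sketch of such a collector; the port, datagram format, delay window and file name are all invented for illustration. One goroutine receives datagrams into a channel, and the writer keeps a time-ordered buffer, flushing only entries older than a grace period so a late datagram can still be slotted into place:

```go
package main

import (
	"bufio"
	"log"
	"net"
	"os"
	"sort"
	"strings"
	"time"
)

// Each datagram is assumed to be "<RFC3339 timestamp> <message>".
type entry struct {
	when time.Time
	line string
}

func main() {
	const delay = 2 * time.Second // how long to wait for stragglers

	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9999})
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.OpenFile("consolidated.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	w := bufio.NewWriter(out)

	incoming := make(chan entry, 1024)

	// Receiver: parse datagrams and queue them.
	go func() {
		buf := make([]byte, 64*1024)
		for {
			n, _, err := conn.ReadFromUDP(buf)
			if err != nil {
				log.Printf("read: %v", err)
				continue
			}
			text := string(buf[:n])
			ts, rest, ok := strings.Cut(text, " ")
			when, perr := time.Parse(time.RFC3339Nano, ts)
			if !ok || perr != nil {
				when, rest = time.Now(), text // fall back to arrival time
			}
			incoming <- entry{when, rest}
		}
	}()

	// Writer: keep a time-ordered buffer, flush entries older than delay.
	var pending []entry
	ticker := time.NewTicker(500 * time.Millisecond)
	for {
		select {
		case e := <-incoming:
			pending = append(pending, e)
		case <-ticker.C:
			sort.Slice(pending, func(i, j int) bool { return pending[i].when.Before(pending[j].when) })
			cutoff := time.Now().Add(-delay)
			n := 0
			for ; n < len(pending) && pending[n].when.Before(cutoff); n++ {
				w.WriteString(pending[n].when.Format(time.RFC3339Nano) + " " + pending[n].line + "\n")
			}
			pending = pending[n:]
			w.Flush()
		}
	}
}
```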
You could continue logging to separate files and find/write a tool to merge them later (perhaps automated, or you could just run it at the point you want to use the files.)
Event Tracing for Windows, included in Windows Vista and later, provides a nice capability for this.
Excerpt:
Event Tracing for Windows (ETW) is an efficient kernel-level tracing facility that lets you log kernel or application-defined events to a log file. You can consume the events in real time or from a log file and use them to debug an application or to determine where performance issues are occurring in the application.