Peeking inside the chrome protocol in Firefox

I was wondering whether it is possible to look inside the input stream
when the "chrome://" protocol is used in Firefox. Let me be more
clear. Let's take the following call sequence, for example:
1. nsXULDocument.cpp has a nsXULDocument::ResumeWalk() method.
2. ResumeWalk() calls LoadScript() [around line 3004].
3. LoadScript() calls NS_NewStreamLoader() [nsXULDocument.cpp, line 3440].
4. NS_NewStreamLoader() calls NS_NewChannel() [nsNetUtil.h, line 593].
5. NS_NewChannel() then calls ioservice->NewChannelFromURI() [nsNetUtil.h, line 226].
6. NewChannelFromURI() calls NewChannelFromURIWithProxyFlags() [nsIOService.cpp, line 596].
7. NewChannelFromURIWithProxyFlags() calls handler->NewChannel(), which is resolved at runtime to nsChromeProtocolHandler::NewChannel() [nsChromeProtocolHandler.cpp, line 182].
8. This in turn calls ioServ->NewChannelFromURI() [nsChromeProtocolHandler.cpp, line 196].
9. Step 6 is repeated.
10. Step 7 is repeated; however, at different times it can resolve to different handlers based on the protocol (chrome, jar, file, etc.) (see the sketch below).
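To make the recursion in steps 9 and 10 concrete, here is a heavily simplified model of the dispatch. The names below (IOService, RegisterHandler, the resolved file path) are purely illustrative and are not the real Mozilla interfaces; the only point is that the chrome handler rewrites the URI and re-enters the same channel-creation entry point, which then picks a different handler by scheme (jar, file, ...).

    // Hypothetical model of the channel-creation recursion described above; these
    // class and function names are NOT the real Mozilla interfaces.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    struct Channel { std::string resolvedSpec; };

    // Extract the scheme, e.g. "chrome" from "chrome://package/content/script.js".
    static std::string SchemeOf(const std::string& spec) {
        return spec.substr(0, spec.find("://"));
    }

    class IOService {
    public:
        using Handler = std::function<Channel(IOService&, const std::string&)>;

        void RegisterHandler(const std::string& scheme, Handler h) {
            handlers_[scheme] = std::move(h);
        }

        // Analogue of NewChannelFromURI(): look the handler up by scheme and delegate.
        Channel NewChannelFromURI(const std::string& spec) {
            std::cout << "NewChannelFromURI(" << spec << ")\n";
            return handlers_.at(SchemeOf(spec))(*this, spec);
        }

    private:
        std::map<std::string, Handler> handlers_;
    };

    int main() {
        IOService io;

        // "file" handler: terminal case, actually produces the channel.
        io.RegisterHandler("file", [](IOService&, const std::string& spec) {
            return Channel{spec};
        });

        // "chrome" handler: rewrites the URI to whatever the chrome registry maps it
        // to (a made-up file:// path here) and re-enters NewChannelFromURI(), the way
        // nsChromeProtocolHandler::NewChannel() calls ioServ->NewChannelFromURI().
        io.RegisterHandler("chrome", [](IOService& svc, const std::string&) {
            return svc.NewChannelFromURI(
                "file:///usr/lib/firefox/chrome/package/content/script.js");
        });

        Channel c = io.NewChannelFromURI("chrome://package/content/script.js");
        std::cout << "loaded from: " << c.resolvedSpec << "\n";
    }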
My intention in describing the above call sequence was to set up the context for my problem. I want to know when the "chrome://" protocol is used, and when it is, I want to process the input stream. For example, if Firefox is loading a script like "chrome://package/content/script.js", I want to intercept the file when it is accessed from disk. After intercepting the file, I might change its contents, or dump the file's contents into a folder of my choice.
So, whenever Firefox reads a file (probably using a method like fread(); I would like to know that as well), I want to determine whether the read request came from the chrome protocol or not, and at that moment make some changes to the file based on my needs. Any help regarding this?

For those who've stumbled here curious about the 'chrome' protocol, here are some references that may be handy:
- Chrome Protocol, part of SPDY
- Let's make the web faster project

Related

Is it possible to use Windows Overlapped IO to wait for another process to write to a file?

Say I want to write a tail-like application for Windows to monitor a bunch of files. Such an application should report when any of the monitored files is updated by another application.
It can be assumed that the files being monitored are constantly being appended to by other processes, but not modified in any other way. Before implementing some polling solution (that is, iterating through the files to be monitored, seeking to the end of each one, recording that position, comparing it to the previous end, etc.), I would appreciate it if someone more experienced with Overlapped I/O could tell me whether I can make use of it.
For instance, is it possible to write the monitoring application in such a way that it opens all the files that need to be monitored, seeks to the end of each of them, and tries to read one byte with ReadFileEx(), registering a completion callback?
Is there a way to make this work so that when another process writes to one of the files, the proper callback is invoked? Or will the monitoring application necessarily always get an EOF for such a call?
Is this approach a sensible one? Or is it a bad idea?
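A minimal sketch of the setup the question describes is below. The file path is a made-up example, and whether the completion routine ever fires for a read issued at end-of-file, rather than the call failing or completing immediately with EOF, is exactly the open question here.

    // Sketch of the setup described in the question: open the file for overlapped
    // I/O, aim a one-byte ReadFileEx() at the current end of file, and wait
    // alertably for the completion routine. The path is made up; error handling
    // is trimmed.
    #include <windows.h>
    #include <cstdio>

    static VOID CALLBACK OnReadDone(DWORD err, DWORD bytes, LPOVERLAPPED) {
        std::printf("completion routine: error=%lu bytes=%lu\n", err, bytes);
    }

    int main() {
        HANDLE h = CreateFileW(L"C:\\logs\\app.log", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                               OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
        if (h == INVALID_HANDLE_VALUE) return 1;

        LARGE_INTEGER end = {};
        GetFileSizeEx(h, &end);                 // "seek": the offset lives in the OVERLAPPED

        OVERLAPPED ov = {};
        ov.Offset     = end.LowPart;            // start reading at the current end of file
        ov.OffsetHigh = static_cast<DWORD>(end.HighPart);

        char byte;
        if (!ReadFileEx(h, &byte, 1, &ov, OnReadDone)) {
            // A read issued at EOF may simply fail (e.g. ERROR_HANDLE_EOF) instead of
            // staying pending until another process appends, which is what is asked.
            std::printf("ReadFileEx failed immediately: %lu\n", GetLastError());
        } else {
            SleepEx(INFINITE, TRUE);            // alertable wait so the APC can run
        }
        CloseHandle(h);
    }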

Confusion about Ruby's IO#(read/write)_nonblock calls

I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length Header attribute to do the parsing.
Other solutions use IO#read_nonblock. I googled it, but was quite confused by its documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior appears when there is no data available to read at call time, but the stream is not yet at EOF:
- read_nonblock() raises an exception that is a kind of IO::WaitReadable
- a normal read(length) blocks until length bytes have been read (or EOF is reached)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock is using the read(2) system call after O_NONBLOCK is set for the underlying file descriptor.
how they play together with the Kernel#select method?
There's also IO.select. In this case we can use it to wait for input data to become available, so that a subsequent read_nonblock() won't raise. This is especially useful when there are multiple input streams and it is not known which stream data will arrive on next, i.e. which one read() would have to be called on.
With a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately. This means you can continue executing your program while the operating system writes the data asynchronously. Then, when you want to write again, you use select to see whether the file is ready to accept the next write.
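For readers who want to see the mechanism this answer refers to, here is a rough illustration, written against the raw POSIX calls, of what read_nonblock plus IO.select boil down to; reading from standard input is just an example.

    // Rough illustration of what IO#read_nonblock plus IO.select boil down to:
    // put the descriptor into O_NONBLOCK mode, let read(2) fail with EAGAIN when
    // no data is available, and use select(2) to sleep until it becomes readable.
    #include <cerrno>
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main() {
        int fd = STDIN_FILENO;

        // What read_nonblock arranges before reading: non-blocking mode on the fd.
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

        char buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n > 0) { fwrite(buf, 1, static_cast<size_t>(n), stdout); continue; }
            if (n == 0) break;                              // EOF: the stream is finished
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                // This is the point where Ruby raises IO::WaitReadable; the idiom is
                // to IO.select on the stream and then retry the non-blocking read.
                fd_set readable;
                FD_ZERO(&readable);
                FD_SET(fd, &readable);
                select(fd + 1, &readable, nullptr, nullptr, nullptr);
                continue;
            }
            perror("read");
            break;
        }
    }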

Ruby file handle management (too many open files)

I am performing very rapid file access in ruby (2.0.0 p39474), and keep getting the exception Too many open files
Having looked at this thread, here, and various other sources, I'm well aware of the OS limits (set to 1024 on my system).
The part of my code that performs this file access is mutexed, and takes the form:
File.open( filename, 'w'){|f| Marshal.dump(value, f) }
where filename is subject to rapid change, depending on the thread calling the section. It's my understanding that this form relinquishes its file handle after the block.
I can verify the number of File objects that are open using ObjectSpace.each_object(File). This reports that there are up to 100 resident in memory, but only one is ever open, as expected.
Further, the exception itself is thrown at a time when ObjectSpace reports only 10-40 File objects. Manually garbage collecting fails to improve any of these counts, as does slowing my script down by inserting sleep calls.
My question is, therefore:
Am I fundamentally misunderstanding the nature of the OS limit---does it cover the whole lifetime of a process?
If so, how do web servers avoid crashing out after accessing over ulimit -n files?
Is ruby retaining its file handles outside of its object system, or is the kernel simply very slow at counting 'concurrent' access?
Edit 20130417:
strace indicates that Ruby doesn't finish writing all of its data to the file before returning and releasing the mutex. As a result, the file handles stack up until the OS limit is reached.
In an attempt to fix this, I have used syswrite/sysread, synchronous mode, and called flush before close. None of these methods worked.
My question is thus revised to:
Why is ruby failing to close its file handles, and how can I force it to do so?
Use dtrace or strace or whatever equivalent is on your system, and find out exactly what files are being opened.
Note that these could be sockets.
I agree that the code you have pasted does not seem to be capable of causing this problem, at least, not without a rather strange concurrency bug as well.
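One concrete way to follow that advice on Linux, besides strace/lsof, is simply to walk /proc/self/fd and resolve each entry; the sketch below assumes a Linux /proc filesystem and shows sockets and pipes as well as regular files.

    // One way to see which descriptors this process is actually holding on Linux
    // (the same view "lsof -p <pid>" gives): walk /proc/self/fd and resolve each
    // symlink. Sockets and pipes show up here too, not just regular files.
    #include <dirent.h>
    #include <unistd.h>
    #include <climits>
    #include <cstdio>
    #include <string>

    int main() {
        DIR* dir = opendir("/proc/self/fd");
        if (!dir) { perror("opendir"); return 1; }
        while (dirent* entry = readdir(dir)) {
            if (entry->d_name[0] == '.') continue;          // skip "." and ".."
            std::string link = std::string("/proc/self/fd/") + entry->d_name;
            char target[PATH_MAX] = {};
            ssize_t n = readlink(link.c_str(), target, sizeof target - 1);
            if (n >= 0) std::printf("fd %s -> %s\n", entry->d_name, target);
        }
        closedir(dir);
    }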

How to identify whether a file being closed was modified or newly created in the KAUTH_FILEOP_CLOSE action from a Mac KEXT

I have observed that FWRITE and KAUTH_FILEOP_CLOSE_MODIFIED are not consistently set in the KAUTH_FILEOP_CLOSE action during file modification or file copy.
My use case is this: I am trying to figure out whether the file being closed is a modified file or a newly created file, and I want to ignore files that are not modified.
As per the documentation, I check for the KAUTH_FILEOP_CLOSE_MODIFIED flag when the file action is KAUTH_FILEOP_CLOSE. Most of the time, I have observed that KAUTH_FILEOP_CLOSE_MODIFIED is not set when a file is copied from one location to another or when a file is modified.
I have also observed that the FWRITE flag is set, but not consistently for modified or copied files. I am wondering why the behavior is so inconsistent.
Another approach I considered was to rely on the vnode action KAUTH_VNODE_WRITE_DATA, but I have observed that multiple KAUTH_VNODE_WRITE_DATA calls come after KAUTH_FILEOP_CLOSE, even when the file is not modified.
Any idea why such behavior exists?
KAuth, and especially KAUTH_FILEOP_CLOSE_MODIFIED, is buggy, and I already reported some problems related to it to Apple (a long time ago):
- Events happening on a file descriptor inherited from a parent process seem not to trigger a call to the KAuth callback at all. (See http://www.openradar.me/8898118)
- The KAUTH_FILEOP_CLOSE_MODIFIED flag is not set when the given file has transparent zlib compression enabled. (See http://www.openradar.me/23029109)
That said, I am quite confident that (as of 10.5.x, 10.6.x, 10.7.x) the callbacks are always called directly from the kernel thread doing the syscall. For example, when open(2) is called, it invokes the kauth callbacks for the vnode scope and then (if allowed by the return value) calls into the VFS driver to carry out the operation. The fileop callback (KAUTH_FILEOP_CLOSE) also runs on the same thread, but is called after the close itself.
Hence I don't think KAUTH_VNODE_WRITE_DATA can come after KAUTH_FILEOP_CLOSE for the same event.
Either you have a bug in your code, or it is another event (e.g. the next open of the same file after it was closed, in the same or another process).
Still, there are some traps you must be aware of:
- Any I/O performed by the kernel itself (including other kexts) does not trigger the kauth callbacks at all.
- If there are multiple callbacks registered for the vnode scope (e.g. from multiple kexts), the kernel calls them one by one for every event. However, as soon as one of them returns KAUTH_RESULT_ALLOW or KAUTH_RESULT_DENY, the decision is made and the remaining callbacks are not called. In other words, all callbacks are called only if every one of them except the last returns KAUTH_RESULT_DEFER. (AFAIK this does not apply to the fileop scope, because there the return value is ignored entirely.)
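For context, a fileop-scope listener of the kind discussed here looks roughly like the sketch below, following the pattern of Apple's KauthORama sample. The kext entry-point names are placeholders, and the sketch omits the synchronization a real kext needs before unloading.

    // Minimal sketch of a fileop-scope KAuth listener, following the pattern of
    // Apple's KauthORama sample. The entry-point names (mykext_start/mykext_stop)
    // are illustrative; real code must also synchronize unloading with callbacks
    // that are still running.
    #include <mach/mach_types.h>
    #include <mach/kmod.h>
    #include <sys/kauth.h>
    #include <libkern/libkern.h>

    static kauth_listener_t gListener = NULL;

    static int
    fileop_callback(kauth_cred_t cred, void *idata, kauth_action_t action,
                    uintptr_t arg0, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
    {
        if (action == KAUTH_FILEOP_CLOSE) {
            const char *path  = (const char *)arg1;   // arg0: vnode, arg1: path string
            int         flags = (int)arg2;            // close flags

            if (flags & KAUTH_FILEOP_CLOSE_MODIFIED) {
                printf("close (modified): %s\n", path);
            } else {
                // As noted above, this flag is not always set even for files that were
                // written (e.g. the transparent-compression case), so it cannot be the
                // only signal.
                printf("close (flag not set): %s\n", path);
            }
        }
        // For the fileop scope the return value is ignored; DEFER is conventional.
        return KAUTH_RESULT_DEFER;
    }

    kern_return_t mykext_start(kmod_info_t *ki, void *d)
    {
        gListener = kauth_listen_scope(KAUTH_SCOPE_FILEOP, fileop_callback, NULL);
        return (gListener != NULL) ? KERN_SUCCESS : KERN_FAILURE;
    }

    kern_return_t mykext_stop(kmod_info_t *ki, void *d)
    {
        if (gListener != NULL)
            kauth_unlisten_scope(gListener);
        return KERN_SUCCESS;
    }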

Move or copy and truncate a file that is in use

I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to, so that the file being written to never grows too big.
Is this possible? Either Windows or Linux is fine.
To be specific what I'm trying to do is log video with FFMPEG and create hour long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need to use some kind of locking mechanism (either by locking some part of the file or some shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the same exact time that the other application were truncating it, then it would lead to unpredictable results.
Back to the comment about both applications needing to be aware of each other ... It is possible that if both applications opened the file exclusively and kept retrying their operations until they succeeded, then performed the operation and closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
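A rough Windows sketch of the locking idea above follows. Everything specific in it (the path, the chosen lock offset, the truncate step) is an illustrative assumption, and it only works if the writing application takes the same lock around its appends.

    // Sketch of the coordination idea from the answer, on Windows: both processes
    // open the file with full sharing, and each takes an exclusive LockFileEx() on
    // a single byte at a fixed offset far beyond any realistic file size before it
    // touches the file. The path, the lock offset and the truncation step are all
    // illustrative assumptions.
    #include <windows.h>

    static const ULONGLONG kLockOffset = 1ULL << 62;   // "semaphore" byte past EOF

    static BOOL LockCoordinationByte(HANDLE h, OVERLAPPED* ov) {
        ZeroMemory(ov, sizeof *ov);
        ov->Offset     = static_cast<DWORD>(kLockOffset & 0xFFFFFFFF);
        ov->OffsetHigh = static_cast<DWORD>(kLockOffset >> 32);
        return LockFileEx(h, LOCKFILE_EXCLUSIVE_LOCK, 0, /*bytes*/ 1, 0, ov);
    }

    int main() {
        HANDLE h = CreateFileW(L"C:\\capture\\out.ts",          // hypothetical path
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return 1;

        OVERLAPPED ov;
        if (LockCoordinationByte(h, &ov)) {
            // Safe window: the writer holds the same lock around its appends, so the
            // rotator can copy the contents elsewhere here and then truncate:
            SetFilePointer(h, 0, nullptr, FILE_BEGIN);
            SetEndOfFile(h);                                   // truncate to zero length
            UnlockFileEx(h, 0, 1, 0, &ov);
        }
        CloseHandle(h);
    }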
logrotate would be a good option on Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.
