Response.Filter making multiple calls to Write()?

For some time I have been using a custom Response.Filter for rewrite purposes. Since I had never tested this module, I decided to run a few tests and spotted something I did not expect.
I realized that pages are written to the stream using multiple calls to Write().
So does this mean that my rewrite logic is called multiple times for the same chunk of HTML, or is the HTML actually divided into partitions?
Please help me understand how Write() works.

Pages are written in chunks.
Each chunk is sent through your filter exactly once.
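This is why a rewriter that needs to see the whole page should buffer the chunks and rewrite once at the end. Here is a minimal sketch of that buffering pattern, assuming ASP.NET's Stream-based filter API; the RewriteFilter name and the https:// rewrite are illustrative, and you would install it with Response.Filter = new RewriteFilter(Response.Filter):

using System;
using System.IO;
using System.Text;

public class RewriteFilter : Stream
{
    private readonly Stream inner;                    // the original Response.Filter
    private readonly MemoryStream buffered = new MemoryStream();

    public RewriteFilter(Stream inner) { this.inner = inner; }

    // Called once per chunk of the rendered page.
    public override void Write(byte[] buffer, int offset, int count)
    {
        buffered.Write(buffer, offset, count);        // accumulate; rewrite later
    }

    public override void Flush()
    {
        // Rewrite the accumulated document in one pass, then forward it.
        string html = Encoding.UTF8.GetString(buffered.ToArray());
        byte[] outBytes = Encoding.UTF8.GetBytes(Rewrite(html));
        inner.Write(outBytes, 0, outBytes.Length);
        inner.Flush();
        buffered.SetLength(0);
    }

    private static string Rewrite(string html)
    {
        return html.Replace("http://", "https://");   // stand-in for real logic
    }

    // Boilerplate required by the abstract Stream class.
    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => buffered.Length;
    public override long Position { get; set; }
    public override int Read(byte[] b, int o, int c) => throw new NotSupportedException();
    public override long Seek(long o, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}

One caveat: ASP.NET may flush more than once mid-response, so a production filter typically defers the rewrite until the response is complete; this sketch keeps it simple.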


How to edit a bio write request

I'm writing a device-mapper which has to edit incoming writes, for example:
Incoming bio request contains a biovec with a page full of 0s, and I want to change it to all 1s.
I tried using bvec_kmap_local() to get the page address; then I can read the data and, if needed, adjust it using memcpy() or similar. Initial tests seemed to work, but if I run something like mkfs on the created dm, I get a lot of segmentation faults. I've narrowed the problem down: it only happens if I actually write something to the mapped page. However, I see no reason why this should cause a fault, as (after checking probably 100 times) I don't access invalid memory. I'm wondering whether my method of editing the write is actually correct?
Thanks! I can provide way more information if needed
So apparently you're not supposed to edit the pages of a bio request, since write submitters (like file systems) expect their data buffers not to be modified. Instead, you should create a new bio with the page you want, and submit that.
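A hedged sketch of that approach, not drop-in code: it assumes a recent kernel (the bio_alloc() signature shown is from roughly v5.18+), assumes the bio carries a single full-page bvec, and omits error handling and the completion plumbing that ends the original bio when the clone finishes:

/* Copy the submitter's page into a page we own, transform the copy,
 * and submit a fresh bio instead of editing the original in place. */
static void submit_transformed_write(struct block_device *bdev, struct bio *orig)
{
    struct page *page = alloc_page(GFP_NOIO);
    struct bio *clone = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOIO);
    struct bio_vec *bv = bio_first_bvec_all(orig);
    void *src = bvec_kmap_local(bv);
    void *dst = kmap_local_page(page);

    memcpy(dst, src, PAGE_SIZE);      /* copy, then edit the copy... */
    memset(dst, 0xff, PAGE_SIZE);     /* ...e.g. turn the 0s into 1s */

    kunmap_local(dst);
    kunmap_local(src);

    clone->bi_iter.bi_sector = orig->bi_iter.bi_sector;
    bio_add_page(clone, page, PAGE_SIZE, 0);
    submit_bio(clone);                /* end orig when the clone completes */
}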

libtorrent new piece alerts

I am developing an application that will stream multimedia files over torrents.
The backend needs to serve new pieces to the frontend as they arrive.
I need a mechanism to get notified when new pieces have arrived and been verified. From what I can tell, I could do this using block_finished_alerts. I would keep track of which blocks have arrived for a given piece, and read the piece when all blocks have arrived.
This solution seems kind of roundabout and I was wondering if there was a better way.
What you're asking for is called piece_finished_alert. It's posted every time a new piece completes downloading and passes the hash-check. To read a piece from disk, you may use torrent_handle::read_piece() (and get the result in read_piece_alert).
However, if you want to stream media, you probably want to use torrent_handle::set_piece_deadline() with the alert_when_available flag, which posts read_piece_alerts as pieces come in. This invokes libtorrent's built-in streaming feature.
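For illustration, a hedged sketch against a recent (1.2/2.x-era) libtorrent API; serve_to_frontend() is a hypothetical hook for your backend:

#include <vector>
#include <libtorrent/session.hpp>
#include <libtorrent/torrent_handle.hpp>
#include <libtorrent/alert_types.hpp>

void serve_to_frontend(lt::piece_index_t piece, char const* data, int size); // hypothetical

// Ask for a piece in streaming order; alert_when_available makes the session
// post a read_piece_alert once the piece is downloaded, hash-checked, and
// read back from disk.
void request_piece(lt::torrent_handle& h, lt::piece_index_t piece)
{
    h.set_piece_deadline(piece, 0, lt::torrent_handle::alert_when_available);
}

void drain_alerts(lt::session& ses)
{
    std::vector<lt::alert*> alerts;
    ses.pop_alerts(&alerts);
    for (lt::alert* a : alerts) {
        if (auto* rp = lt::alert_cast<lt::read_piece_alert>(a)) {
            if (rp->error) continue;          // disk read failed
            serve_to_frontend(rp->piece, rp->buffer.get(), rp->size);
        }
    }
}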

dojo.io.script.get caching

I'm loading data using dojo.io.script.get. The size of each request can be big, and I need to issue lots of them.
The question is: after the data is loaded and later discarded, is it cached by the browser?
In other words, say I load some data whose content is "myFunc('blah blah blah')"; it will execute the myFunc function. What happens to the browser's memory after execution? If I load it, say, 100 times, and the size of each string passed to myFunc is, say, 1GB, will the browser run out of memory?
Thanks.
Andrei
One of the things I have learned about Dojo is that the source code is a great reference.
My quick inspection of dojo/io/script.js shows that there is some logic involving dead script tags and destroying script tags, so I would guess it protects against the memory leaks you mention. (Of course, you should always test this kind of thing yourself, just to be sure.)
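For intuition, here is roughly the pattern dojo/io/script implements internally (a hand-rolled JSONP sketch, not Dojo's actual code): once the injected script has run, the script tag is removed and the callback released, so the payload string becomes garbage-collectible as long as your own code doesn't keep a reference to it.

function jsonpGet(url, onLoad) {
  var name = "__jsonp_cb_" + Date.now();          // unique global callback name
  var script = document.createElement("script");
  window[name] = function (data) {
    onLoad(data);
    delete window[name];                          // release the callback...
    script.parentNode.removeChild(script);        // ...and destroy the script tag
  };
  script.src = url + (url.indexOf("?") < 0 ? "?" : "&") + "callback=" + name;
  document.head.appendChild(script);
}

So 100 sequential 1GB responses shouldn't accumulate, but each individual response still has to fit in memory while it loads and executes.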

Does WordPress load-cache functions such as wp_get_nav_menu_object?

I've been asking myself this question for quite a while. Maybe someone has already done some digging (or is involved in WP) to know the answer.
I'm talking about storing objects from WP-functions in PHP variables, for the duration of a page load, e.g. to avoid having to query the database twice for the same result set.
I don't mean caching in the sense of pre-rendering dynamic pages and saving them in HTML format for faster retrieval.
Quite a few "template tags" (Wordpress functions) may be used multiple times in a theme during one page load. When a theme or plugin calls such a function, does WP run a database query every time to retrieve the necessary data, and does it parse this data every time to return the desired object?
Or, does the function store the its result in a PHP variable the first time it runs, and checks if it already exists before it queries the database or parses?
Examples include:
wp_get_nav_menu_object()
wp_get_nav_menu_items()
wp_list_categories()
wp_tag_cloud()
wp_list_authors()
...but also such important functions as bloginfo() or wp_nav_menu().
Of course, it wouldn't make much sense to cache each and every query, such as post-related ones. But for the above examples (there are more), I believe it would.
So far, I've been caching these generic functions myself when a theme required the same function to be called more than once on a page, by writing my own functions or classes and caching in global or static variables. I don't see why I should add to the server load by running the exact same generic query more than a single time.
Does this sort of caching already exist in Wordpress?
Yes, for some queries and functions. See WP Object Cache. The relevant functions are wp_cache_get, wp_cache_set, wp_cache_add, and wp_cache_delete. You can find these functions used in many places throughout the WordPress code to do exactly what you are describing.
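As an illustration of the same pattern for your own wrappers (the function name and cache group here are made up), the object-cache API looks like this:

// Hypothetical wrapper: cache a menu object for the rest of the page load.
function my_get_menu_object( $menu_id ) {
    $menu = wp_cache_get( $menu_id, 'my_nav_menus', false, $found );
    if ( $found ) {
        return $menu;                      // repeat call: no extra DB query
    }
    $menu = wp_get_nav_menu_object( $menu_id );
    wp_cache_set( $menu_id, $menu, 'my_nav_menus' );
    return $menu;
}

Note that by default the object cache lives only for the duration of a single page load; a persistent-cache drop-in (Memcached, Redis, etc.) extends it across requests.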

Streaming files from EventMachine handler?

I am creating a streaming EventMachine server. I'm concerned about avoiding blocking IO or doing anything else that would muck up the event loop.
From what I've read, Ruby's non-blocking IO can be used to stream files in a non-blocking way, or I can call EM.next_tick, but I'm a little unclear about which of these approaches is preferable.
Part of the problem is that I have not found a good explanation of Ruby's non-blocking IO library functions.
Short version:
Assuming a long-lived network IO operation, with several wall-clock minutes of streaming per file transfer, what is the best way to do this in EventMachine without gumming up the event loop?
# Pseudocode: read the whole file and push it out in one tight loop.
while bytes = file.read(16 * 1024)
  conn.send_data bytes
end
I understand that the above code will block, and I'm wondering what to put in its place. Also, I cannot use EventMachine's FileStreamer class as-is, because I need to manipulate the data after it's read but before it's sent.
I think you can still use FileStreamer. FileStreamer expects its first argument to be a Connection, but this is a loose contract. As long as you implement the methods that FileStreamer expects, it should work. Take a look at this
https://gist.github.com/f4d997c3eeb6bdc5a9f3
The methods you'll need to handle are send_data and send_file_data. You can perform your manipulations here. Then pass the result along to EM::Connection.
Also, from my reading of the code, the special property of FileStreamer is that it allocates a memory-mapped file (unless the file is small). You could do essentially the same thing by opening a regular Ruby File, reading blocks out of it, doing your manipulation, and emulating the behavior of FileStreamer.stream_one_chunk, which is basically:
- Each iteration must either send some data to the Connection, or reschedule itself using next_tick.
- Data can be repeatedly written to the Connection until the outbound buffer is full (according to get_outbound_data_size).
- Once the file has been fully read, it should be closed (of course).
In fact, it seems to me that you're better off not using FileStreamer at all unless your file will comfortably fit in memory.
You can look at the EM::Protocols for ideas about how to transform the data as it is streaming through.
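Here is a minimal sketch of that stream_one_chunk-style loop, assuming an EM::Connection (conn) and an open File; transform is a hypothetical hook for your manipulation, and the chunk size and buffer cap are arbitrary:

CHUNK_SIZE = 16 * 1024
BUFFER_CAP = 64 * 1024   # stop writing once the outbound buffer grows past this

def stream_chunk(conn, file)
  if conn.get_outbound_data_size > BUFFER_CAP
    # Outbound buffer is full: yield to the reactor and try again later.
    EM.next_tick { stream_chunk(conn, file) }
  elsif bytes = file.read(CHUNK_SIZE)
    conn.send_data(transform(bytes))    # transform = your manipulation hook
    EM.next_tick { stream_chunk(conn, file) }
  else
    file.close                          # fully read
  end
end

Because each iteration does a bounded amount of work and reschedules itself with EM.next_tick, the event loop stays responsive for the several minutes the transfer takes.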
