I'm reading the Gforth manual on memory allocation / deallocation, and this is something I cannot understand. Suppose I allocated a chunk of memory to hold four integers like this:
create foo 1 , 2 , 3 , 4 ,
Then, maybe I allocated more memory and perhaps deallocated some too, and now I want to deallocate foo. How do I do that? Doing foo free and foo 4 cells free results in an error.
One option is to use forget foo, but that will 'deallocate' everything you have defined since you defined foo, and, worse than that, Gforth doesn't implement forget at all. In Gforth you have to use a 'marker' instead, but that likewise reverts everything that happened after the marker.
For example, here is what you would get entering this into a Gforth interpreter; the interpreter's responses are denoted by double asterisks:
marker -unfoo **ok**
create foo 1 , 2 , 3 , 4 , **ok**
\ A test word to get the first thing in foo (1) back
: test foo @ . ; **ok**
test **1 ok**
-unfoo **ok**
foo
**:8: Undefined word
>>>foo<<<
Backtrace:
$7FAA4EB4 throw
$7FAB1628 no.extensions
$7FAA502C interpreter-notfound1**
test
**:8: Undefined word
>>>test<<<
Backtrace:
$7FAA4EB4 throw
$7FAB1628 no.extensions
$7FAA502C interpreter-notfound1**
The example is meant to illustrate that foo and test are both gone after you execute -unfoo.
How this actually works is probably by moving the pointer that the interpreter treats as the end of the dictionary. -unfoo moves it back to just before the address at which foo was added, which is equivalent to freeing the memory used by foo.
Another good reference is Starting Forth, which is excellent for picking up Forth in general.
In response to a comment on this answer:
This question is quite similar and this answer is pretty helpful. This is probably the most relevant part of the Gforth documentation.
The links above explain Forth versions of malloc(), free() and resize().
So, in answer to your original question: you can use free, but only on memory that was allocated by allocate or resize.
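For example, here is a minimal sketch of holding four integers on the heap rather than in the dictionary (the name bar is just an illustration):
4 cells allocate throw constant bar   \ reserve room for four cells; throw on failure
1 bar !                               \ store 1 in the first cell
2 bar cell+ !                         \ store 2 in the second cell
bar @ .                               \ prints 1
bar free throw                        \ hand the memory back to the heap
Note that bar itself is still a dictionary entry; it is the block it points to that lives on the heap and can be freed.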
create adds an item to the dictionary, so it is not really what you want if you are going to want the memory back. My understanding, which may be incorrect, is that you wouldn't normally remove things from the dictionary in the course of normal execution.
The best way to store a string depends on what you want to do with it. If you don't need it to exist for the lifetime of the programme, you can just use s" by itself, as it returns an address and a length.
In general, I would say that using create is quite a good idea, but it does have limitations. If the string changes, you will have to create a new dictionary entry for it. If you can set an upper bound on the string length, then once you have created a word you can go back and overwrite the memory that has been allotted for it.
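As a sketch of that fixed-upper-bound approach (the buffer name and size here are arbitrary):
create buf 32 chars allot   \ a dictionary buffer with room for 32 characters
s" hello" buf swap move     \ copy the string's characters into buf
s" goodbye" buf swap move   \ later, overwrite the buffer in place
Real code would also need to record the string's current length somewhere, for example as a counted string.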
Here is another answer of mine that gives an example of defining a string word.
So, in summary: if you really do need to be able to deallocate the memory, use the heap words that Gforth provides (I think they are in the Forth standard, but I don't know whether all Forths implement them). If you don't, you can use the dictionary as in your question.
The CREATE, ALLOT, and VARIABLE words consume dictionary space (look it up in the ISO 93 standard).
Traditionally you can do
FORGET aap
but that removes aap and every definition defined later than aap, which is totally different from free().
In complicated Forths like Gforth this simple mechanism no longer works. In simpler Forths it amounted to truncating the dictionary's linked list and resetting the allocation pointer (HERE/DP).
In Gforth you are obliged to use MARKER. After defining
MARKER aap
you can execute aap to remove aap and all later-defined words.
MARKER is cumbersome and it is much easier to restart your Forth.
Related
I want to get the last element of a lazy but finite Seq in Raku, e.g.:
my $s = lazy gather for ^10 { take $_ };
The following don't work:
say $s[* - 1];
say $s.tail;
These ones work but don't seem too idiomatic:
say (for $s<> { $_ }).tail;
say (for $s<> { $_ })[* - 1];
What is the most idiomatic way of doing this while keeping the original Seq lazy?
What you're asking about ("get[ing] the last element of a lazy but finite Seq … while keeping the original Seq lazy") isn't possible. I don't mean that it's not possible with Raku – I mean that, in principle, it's not possible for any language that defines "laziness" the way Raku does with, for example, the is-lazy method.
In particular, when a Seq is lazy in Raku, that "means that [the Seq's] values are computed on demand and stored for later use." Additionally, one of the defining features of a lazy iterable is that it cannot know its own length while remaining lazy – that's why calling .elems on a lazy iterable throws an error:
my $s = lazy gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «True»
$s.elems; # THROWS: «Cannot .elems a lazy list onto a Seq»
Now, at this point, you might reasonably be thinking "well, maybe Raku doesn't know how long $s is, but I can tell that it has exactly 10 elements in it." And you're not wrong – with that code, $s is indeed guaranteed to have 10 elements. This means that, if you want to get the tenth (last) element of $s, you can do so with $s[9]. And accessing $s's tenth element like that won't change the fact that $s.is-lazy.
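That is, continuing with the $s defined above:
say $s[9];      # OUTPUT: «9»
say $s.is-lazy; # OUTPUT: «True»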
But, importantly, you can only do so because you know something "extra" about $s, and that extra info undoes a good chunk of the reason you might want a list to be lazy in practice.
To see what I mean, consider a very similar Seq:
my $s2 = lazy gather for ^10 { last if rand > .95; take $_ };
say $s2.is-lazy; # OUTPUT: «True»
Now, $s2 probably has 10 elements, but it might not – the only way to know is to iterate through it and find out. In turn, this means $s2[9] does not jump to the tenth element the way $s[9] did; it iterates through $s2 just like you'd need to. And, as a result, if you run $s2[9], then $s2 will no longer be lazy (i.e., $s2.is-lazy will return False).
And this is, in effect, what you did in the code in your question:
my $s = lazy gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «True»
say (for $s<> { $_ }).tail; # OUTPUT: «9»
say $s.is-lazy; # OUTPUT: «False»
Because Raku cannot ever know that it has reached the tail of a lazy Seq, the only way it could tell you the .tail is to fully iterate $s. And that necessarily means that $s is no longer lazy.
Two complications
It's worth mentioning two adjacent topics that aren't actually related but that are close enough that they trip some people up.
First, nothing I've said about lazy iterables not knowing their length precludes some non-lazy iterables from knowing their length. Indeed, a decent number of Raku types do both the Iterator role and the PredictiveIterator role – and the main point of a PredictiveIterator is that it does know how many elements it can produce without needing to produce/iterate them. But PredictiveIterators cannot be lazy.
The second potentially confusing topic is closely related to the first: while no PredictiveIterator can be lazy (that is, none will ever have an .is-lazy method that returns True), some PredictiveIterators have behavior that is very similar to laziness – and, in fact, may even be colloquially referred to as "lazy".
I can't do a great job explaining this distinction because, quite honestly, I don't fully understand it myself. But I can give you an example: the .lines method on an IO::Handle. It's certainly the case that reading the lines of a huge file behaves a lot like dealing with a lazy iterable: most obviously, you can process each line without ever having the whole file in memory. And the docs even say that "lines are read lazily" with the .lines method.
On the other hand:
my $l = 'some-file-with-100_000-lines.txt'.IO.lines;
say $l.is-lazy; # OUTPUT: «False»
say $l.iterator ~~ PredictiveIterator; # OUTPUT: «True»
say $l.elems; # OUTPUT: «100000»
So I'm not quite sure whether it's fair to say that $l "is a lazy iterable", but if it is, it's "lazy" in a different way than $s was.
I realize that was a lot, but I hope it is helpful. If you have a more specific use case in mind for laziness (I bet it wasn't gathering the numbers from zero to nine!), I'd be happy to address that more specifically. And if anyone else can fill in some of the details with .lines and other lazy-not-lazy PredictiveIterators, I'd really appreciate it!
Drop the lazy
Lazy sequences in Raku are designed to work well as is. You don't need to emphasize they're lazy by adding an explicit lazy.
If you add an explicit lazy, Raku interprets that as a request to block operations such as .tail because they will almost certainly immediately render laziness moot, and, if called on an infinite sequence, or even just a sufficiently large one, hang or OOM the program.
So, either drop the lazy, or don't invoke operations like .tail that will be blocked if you do.
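For example, the gather from the question without the lazy prefix:
my $s = gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «False»
say $s.tail;    # OUTPUT: «9»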
Expanded version of my original answer
As noted by @ugexe, the idiomatic solution is to drop the lazy.
Quoting my answer to the SO question About Laziness:
if a gather is asked if it's lazy, it returns False.
Aiui, something like the following applies:
Some lazy sequence producers may be actually or effectively infinite. If so, calling .tail etc on them will hang the calling program. Conversely, other lazy sequences perform fine when all their values are consumed in one go. How should Raku distinguish between these two scenarios?
A decision was made in 2015 to let value producing datatypes emphasize or deemphasize their laziness via their response to an .is-lazy call.
Returning True signals that a sequence is not only lazy but wants to be known to be lazy by consuming code that calls .is-lazy. (Not so much end-user code but instead built-in consuming features such as @ sigilled variables handling an assignment, trying to determine whether or not to assign eagerly.) Built-in consuming features take a True as a signal that they ought to block calls like .tail. If a dev knows this is overly conservative, they can add an eager (or remove an unneeded lazy).
Conversely, a datatype, or even a particular object instance, may return False to signal that it does not want to be considered lazy. This may be because the actual behaviour of a particular datatype or instance is eager, but it might instead be that it is lazy technically, but doesn't want a consumer to block operations such as .tail because it knows they will not be harmful, or at least prefers to have that be the default presumption. If a dev knows better (because, say, it hangs the program), or at least does not want to block potentially problematic operations, they can add a lazy (or remove an unneeded eager).
I think this approach works well, but the doc and error messages mentioning "lazy" may not have caught up with the shift made in 2015. So:
If you've been confused by some doc about laziness, please search for doc issues with "lazy" in them, or "laziness", and add comments to existing issues, or file a new doc issue (perhaps linking to this SO answer).
If you've been confused by a Rakudo error message mentioning laziness, please search for Rakudo issues with "lazy" in them, and tagged [LTA] (which means "Less Than Awesome"), and add comments, or file a new Rakudo issue (with an [LTA] tag, and perhaps a link to this SO answer).
Further discussion
the docs ... say “If you want to force lazy evaluation use the lazy subroutine or method. Binding to a scalar or sigilless container will also force laziness.”
Yes. Aiui this is correct.
[which] sounds like it implies “my $x := lazy gather { ... } is the same as my $x := gather { ... }”.
No.
An explicit lazy statement prefix or method adds emphasis to laziness, and Raku interprets that to mean it ought block operations like .tail in case they hang the program.
In contrast, binding to a variable alters neither emphasis nor deemphasis of laziness, merely relaying onward whatever the bound producer datatype/instance has chosen to convey via .is-lazy.
not only in connection with gather but elsewhere as well
Yes. It's about the result of .is-lazy:
my $x = (1, { .say; $_ + 1 } ... 1000);
my $y = lazy (1, { .say; $_ + 1 } ... 1000);
both act lazily ... but $x.tail is possible while $y.tail is not.
Yes.
An explicit lazy statement prefix or method forces the answer to .is-lazy to be True. This signals to a consumer that cares about the dangers of laziness that it should become cautious (eg rejecting .tail etc.).
(Conversely, an eager statement prefix or method can be used to force the answer to .is-lazy to be False, making timid consumers accept .tail etc calls.)
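For example, a sketch using the method form of eager; note that it fully evaluates the sequence first, so only use it when you know the sequence is finite:
my $y = lazy (1, { $_ + 1 } ... 1000);
say $y.eager.tail; # OUTPUT: «1000»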
I take from this that there are two kinds of laziness in Raku, and one has to be careful to see which one is being used where.
It's two kinds of what I'll call consumption guidance:
Don't-tail-me If an object returns True from an .is-lazy call then it is treated as if it might be infinite. Thus operations like .tail are blocked.
You-can-tail-me If an object returns False from an .is-lazy call then operations like .tail are accepted.
It's not so much that there's a need to be careful about which of these two kinds is in play, but if one wants to call operations like tail, then one may need to enable that by inserting an eager or removing a lazy, and one must take responsibility for the consequences:
If the program hangs due to use of .tail, well, DIHWIDT.
If you suddenly consume all of a lazy sequence and haven't cached it, well, maybe you should cache it.
Etc.
What I would say is that the error messages and/or doc may well need to be improved.
Does creating intermediate variables cause the garbage collector to do more work?
That is, is there any difference between:
output = :asdf.to_s.upcase
and
str = :asdf.to_s
output = str.upcase
? (Assume str is never referenced again.)
It would be a trivial amount of extra work when marking objects still referenced, assuming both str and output were still in scope (i.e. the binding where they exist was still active) when the GC mark phase began. Both variables would start a mark on the same string. I don't know, but suspect that when marking objects as still viable, if Ruby comes across an item already marked, it will probably stop recursing and go to its next item at the same level. In this case the String is a single object without child objects to mark further, so it's one quick call to rb_gc_mark repeated for each reference to the String - one case where it is marked, and another case where Ruby notes it has already been marked and stops recursing.
If neither variable were in any active binding when GC mark phase began, it is no extra work, the String referenced would not get marked (no work) and the sweep phase would delete it just once (same work no matter how many references were active before).
Let's say we have a Ruby class like this:
class MyClass
def self.my_method(p)
# do some cool stuff with a huge amount of objects
my_objects = ...
return my_objects
end
end
And somewhere else in the application there's a function that calls MyClass's my_method, pretty much like this:
def my_func
#doing some stuff ..
MyClass.my_method(some_param)
#doing other stuff ..
end
What happens to the list of objects, is it eligible for garbage collection? Is it possible to know roughly when it's going to be collected?
Is there a way to "mark" the list as eligible for GC? Maybe like this:
def my_func
#doing some stuff ..
objects = MyClass.my_method(some_param)
objects = nil #does this make any difference?
#doing other stuff ..
end
GC destroys all objects which are not referenced by your code. By setting objects to nil, you drop the variable's reference, so the objects will be GC'ed, but exactly the same thing is going to happen with the first version of the code. The real question is: why do you need this object to be GC'ed at a precise moment? It shouldn't affect your code at all.
If you really want to have better control over garbage collection, have a look at the GC class: http://www.ruby-doc.org/core-1.9.3/GC.html. Note that you can call GC.start, which will force GC to run at that precise moment (even if there is nothing to collect).
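For example (a sketch; GC.start really exists, but when objects are actually reclaimed remains up to the VM):
objects = MyClass.my_method(some_param)
# ... work with objects ...
objects = nil # drop the reference so the list becomes unreachable
GC.start      # request a collection right now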
Items returned from the function are eligible for being collected once nothing else points to them.
So, if you ignore the return value and really nothing else remembers those objects, then yes, they can be GC'ed.
So, if you store the result in the objects variable, then the returned values will be "pinned" (**) for as long as the objects variable still remembers them (***). When you nil that variable, they are released and pending collection. Nilling the variable may speed up their collection, but does not necessarily have to (*).
Unless anything else still remembers them, that is. If, between objects = f() and objects = nil, you read the values from the objects variable and pass them to other functions/methods, and they happen to store those objects, then of course that pins them too, and nilling will help a bit in releasing the resources but will not cause any immediate collection (*).
(*) In general, in environments with GC, you never actually know when the GC will run and what it will collect. You just know that objects which have been forgotten by everyone will eventually be removed automatically. Nothing more. Theoretically, the GC may choose not to run at all if your machine has terabytes of free memory.
(**) in some environments (like .Net) "pinning" is a precise term. Here I said it like that just to help you imagine how it works. I do not mean real pinning of memory blocks for communication with lower-level libraries, etc.
(***) If an object A remembers object B, which remembers object C, and B becomes forgotten and only B (and no one else) remembers C, then both B and C are GC'ed. So you don't have to nil the objects: if the thing that contains the objects variable itself becomes forgotten at some point, then the "outer thing", objects, and the returned items will all be GC'ed. At least they should be, if the GC implementation is OK.
This leaves one more thing to say: I am not talking specifically about the GC in Ruby 2.0. Everything above is about garbage collectors in general; it applies equally to Java, .NET, ObjC (with GC), and others. If you need to know precisely what happens in Ruby 2.0 and the gory details of its GC implementation, ask about that directly :)
We have all read about or heard about the stack class, but many of us have probably never found a reason to use the LIFO object. I am curious to hear of real world solutions that used this object and why.
http://msdn.microsoft.com/en-us/library/system.collections.stack.aspx
I recently saw an example where a programmer used a stack to keep track of his current position while traversing a hierarchical data source. As he moved down the hierarchy, he pushed his position identifier onto the stack, and as he moved back up he popped items off the stack. I thought this was a very efficient way to keep track of his current position in a mammoth hierarchy. I had never seen this before.
Anyone else have any examples?
I've used them to keep track of Undo and Redo actions.
I use an interface something like this:
interface ICommand
{
void Execute();
void Undo();
string Description { get; }
}
Undo and Redo are both of type Stack<ICommand>. Then I create a concrete class for a given action. In the class's constructor, I pass in any information I'd need to hold on to. Execute does the action initially, and also redoes it; Undo undoes it, obviously. It works like this:
Undo an action: Pop the Undo stack and add to the Redo stack.
Redo an undone action: Pop the Redo stack and add to the Undo stack again.
Perform a new action: Add to the Undo stack and clear the Redo stack (since the state is no longer consistent).
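Here is a minimal sketch of how those three rules can be wired up (the Do/Undo/Redo helper names are just illustrations):
// assumes using System.Collections.Generic;
Stack<ICommand> undo = new Stack<ICommand>();
Stack<ICommand> redo = new Stack<ICommand>();

void Do(ICommand command)
{
    command.Execute();   // perform the new action
    undo.Push(command);
    redo.Clear();        // a new action invalidates the redo history
}

void Undo()
{
    if (undo.Count == 0) return;
    ICommand command = undo.Pop();
    command.Undo();
    redo.Push(command);
}

void Redo()
{
    if (redo.Count == 0) return;
    ICommand command = redo.Pop();
    command.Execute();   // Execute both does and redoes the action
    undo.Push(command);
}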
I found that you have to take care that you're really undoing what was done. For instance, say you have a UI with two listboxes, and each has five items in it. Your action might be to click a button to move everything on the left list to the right list (so it now has ten, and the left list has zero).
The undo action is not to move everything back; the undo action is to move back only the five you actually moved, and leave the others.
Stacks are used whenever a stored procedure / subroutine is called, to store local variables and the return address.
Stacks are used for expression evaluation (e.g. in a calculator, or in your compiler): first the expression is converted to RPN, then a simple stack machine evaluates it. This works as follows: when you see an operand, push it on the stack; when you see an operator, pop the operands and evaluate.
Example: 5 6 + 3 *
Steps:
see 5: push 5
see 6: push 6
see +: pop twice, apply +, get 11, push 11
see 3: push 3
see *: pop twice, apply *, get 33, push 33
The result is on the top of the stack.
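Here is a minimal sketch of such a stack machine, assuming space-separated RPN tokens:
// assumes using System.Collections.Generic;
static double EvalRpn(string expression)
{
    Stack<double> stack = new Stack<double>();
    foreach (string token in expression.Split(' '))
    {
        switch (token)
        {
            case "+": case "-": case "*": case "/":
                double b = stack.Pop();            // note: right operand pops first
                double a = stack.Pop();
                stack.Push(token == "+" ? a + b
                         : token == "-" ? a - b
                         : token == "*" ? a * b
                         : a / b);
                break;
            default:
                stack.Push(double.Parse(token));   // operand: push it
                break;
        }
    }
    return stack.Pop();                            // result is on top
}
// EvalRpn("5 6 + 3 *") returns 33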
If you have a recursive algorithm, you can typically rewrite it using an explicit stack, since recursive algorithms implicitly use a stack already.
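For instance, a recursive depth-first tree walk can become a loop over a Stack<Node> (the Node type here is a made-up illustration):
// assumes using System; using System.Collections.Generic;
class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

static void Visit(Node root)
{
    Stack<Node> stack = new Stack<Node>();
    stack.Push(root);
    while (stack.Count > 0)
    {
        Node node = stack.Pop();
        Console.WriteLine(node.Name);  // "visit" the node
        foreach (Node child in node.Children)
            stack.Push(child);         // pushed children stand in for recursive calls
    }
}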
You can validate string inputs that require balanced tokens. Think LISP:
(+ (- 3 2) (+ (+ 4 5) 11))
When you hit an opening paren:
stack.Push("(")
Then when you hit a closing paren:
stack.Pop()
If there are any tokens left in your stack when you're done, it's not balanced.
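Put together, here is a sketch of a complete checker for a single token type:
// assumes using System.Collections.Generic;
static bool IsBalanced(string input)
{
    Stack<char> stack = new Stack<char>();
    foreach (char c in input)
    {
        if (c == '(')
            stack.Push(c);
        else if (c == ')')
        {
            if (stack.Count == 0) return false; // a closer with nothing open
            stack.Pop();
        }
    }
    return stack.Count == 0; // anything left open means unbalanced
}
// IsBalanced("(+ (- 3 2) (+ (+ 4 5) 11))") returns true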
You can get fancier and validate proper nesting in inputs like HTML. In a highly contrived example:
//see opening body
stack.Push("body")
//see opening div
stack.Push("div")
//see opening p
stack.Push("p")
//see closing div
if (!stack.Pop().Equals("div")) {
    //not balanced
}
I've used stacks for image processing, where the "processing language" must be specified in a URL. A stack-based form lets you represent a tree of operations in an easy-to-parse, easy-to-think-about form.
See:
http://www.hackification.com/2008/10/29/stack-based-processing-part-1/
and
http://www.hackification.com/2008/10/29/stack-based-processing-part-2/
In one real-life use, a PostScript generator class has a "current_font" state, used as the font for any operations that draw text. Sometimes a function needs to set the font temporarily, but then have it go back to the way it was. We could just use a temporary variable to save and restore the font:
def draw_body
old_font = ps.get_font
ps.set_font('Helvetica 10')
draw_top_section
draw_bottom_section
ps.set_font(old_font)
end
But by the third time you've done that you'll want to stop repeating yourself. So let's let the ps object save and restore the font for us:
class PS
  def save_font
    @old_font = get_font   # an instance variable, so restore_font can see it
  end
  def restore_font
    set_font(@old_font)
  end
end
Now the caller becomes:
def draw_body
ps.save_font
ps.set_font('Helvetica 10')
draw_top_section
draw_bottom_section
ps.restore_font
end
That works fine, until we use the same pattern inside one of the subroutines called by draw_body:
def draw_top_section
ps.save_font
ps.set_font('Helvetica-bold 14')
# draw the title
ps.restore_font
# draw the paragraph
end
When draw_top_section calls save_font, it clobbers the font that was saved by draw_body. It's time to use a stack:
class PS
  def push_font
    (@font_stack ||= []).push(get_font)  # create the stack on first use
  end
  def pop_font
    set_font(@font_stack.pop)
  end
end
And in the callers:
def draw_top_section
ps.push_font
ps.set_font('Helvetica-bold 14')
# draw the title
ps.pop_font
# draw the body
end
There are further refinements possible, such as having the PS class automatically save and restore the font, but it's not necessary to go into those to see the value of a stack.
I find stacks quite useful in multithreaded applications for keeping track of statuses in reverse-chronological order.
Every thread pushes a status message onto a synchronized shared stack, and you get a kind of "breadcrumb" trail of what has happened.
Not quite .NET-specific, but... that's my opinion =)
Here's an implementation of a deep compare where a Stack is used to keep track of the path to the current object being compared.
C# implementation of deep/recursive object comparison in .net 3.5
I've also used it in similar types of code working with generating xpath statements for particular xml nodes.
To provide a specific example to illuminate what other people are commenting on: to implement a Z-machine interpreter, three different stacks are needed: a call stack and a couple of different kinds of object stacks. (The specific requirements can be found here.) Note that, as with all of these examples, using a stack isn't strictly required, but it is the obvious choice.
The call stack keeps track of recursive calls to subroutines, while the object stack is used to keep track of internal items.
In a computer graphics class (not .NET) we used a Stack to keep track of objects that were drawn on the screen. This allowed all the objects to be redrawn on the screen for each refresh as well as keeping track of the order or "z-layer" of each object, so when they moved they could overlap other objects.
I am working my way through Ferret (Ruby port of Lucene) code to solve
a bug. Ferret code is mainly a C extension to Ruby. I am running into
some issues with the garbage collector. I managed to fix it, but I
don't completely understand my fix =) I am hoping someone with deeper
knowledge of Ruby and C extension (this is my 3rd day with Ruby) can
elaborate. Thanks.
Here is the situation:
Some where in Ferret C code, I am returning a "Token" to Ruby land.
The code looks like
static VALUE get_token (...)
{
...
RToken *token = ALLOC(RToken);
token->text = rb_str_new2("some text");
return Data_Wrap_Struct(..., &frt_token_mark, &frt_token_free, token);
}
frt_token_mark calls rb_gc_mark(token->text) and frt_token_free
just frees the token with free(token)
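For reference, those two callbacks presumably look something like this (reconstructed from the description above, not copied from the Ferret source):
static void frt_token_mark(void *p)
{
    RToken *token = (RToken *)p;
    rb_gc_mark(token->text);   /* tell the GC the wrapped Ruby string is reachable */
}

static void frt_token_free(void *p)
{
    free(p);                   /* release the C struct itself */
}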
In Ruby, this code correlates to the following:
token = @input.next
Basically, @input is set to some object; calling the next method on it triggers the get_token C call, which returns a token object.
In Ruby land, I then do something like w = token.text.scan('\w+')
When I run this code inside a while 1 loop (to isolate my problem), at
some point (roughly when my ruby process mem footprint goes to 256MB,
probably some GC threshold), Ruby dies with errors like
scan method called on terminated object
Or just core dumps. My guess was that token.text was garbage collected.
I don't know enough about Ruby C extension to know what happens with
Data_Wrap_Struct returned objects. Seems to me the assignment in Ruby
land, token =, should create a reference to it.
My "work-around"/"fix" is to create a Ruby instance variable in the
object referred to by #input, and stores the token text in there, to
get an extra reference to it. So the C code looks like
RToken *token = ALLOC(RToken);
token->text = rb_str_new2(tk->text);
/* added code: prevent garbage collection */
rb_ivar_set(input, id_curtoken, token->text);
return Data_Wrap_Struct(cToken, &frt_token_mark, &frt_token_free, token);
So now I've created a "curtoken" instance variable on the input object and saved a copy of the text there. I've taken care to remove/delete this reference in the free callback of @input's class.
With this code, it works in that I no longer get the terminated object
error.
The fix seems to make sense to me -- it keeps an extra ref in curtoken
to the token.text string so an instance of token.text won't be removed
until the next time #input.next is called (at which time a different
token.text replaces the old value in curtoken).
My question is: why did it not work before? Shouldn't Data_Wrap_Struct return an object that, when assigned in Ruby land, has a valid reference and is not removed by Ruby?
Thanks.
When the Ruby garbage collector is invoked, it has a mark phase and a sweep phase. The mark phase marks all objects in the system by marking:
all objects referenced by a ruby stack frame (e.g. local variables)
all globally accessible objects (e.g. referred to by a constant or global variable) and their children/referents, and
all objects referred to by a reference on the stack, as well as those objects' children/referents.
as well as a number of other objects that are not important to this discussion. The sweep phase then destroys any objects that are not accessible (i.e. those that were not marked).
Data_Wrap_Struct returns a reference to an object. As long as that reference is available to ruby code (e.g. stored in a local variable) or is on the stack (referred to by a local C variable), the object should not be swept.
It looks, from what you've posted, like token->text is getting garbage collected. But why is it getting collected? It must not be getting marked. Is the Token object itself getting marked? If it is, then token->text should be getting marked too. Try setting a breakpoint or printing a message in the token's mark function to see.
If the token is not getting marked, then the next step is to figure out why. If it is getting marked, then the next step is to figure out why the string returned by the text() method is getting swept (maybe it's not the same object that is getting marked).
Also, are you sure that it is the token's text member that is causing the exception? Looking at:
http://github.com/dbalmain/ferret/blob/master/ruby/ext/r_analysis.c
I see that the token and the token stream both have text() methods. The TokenStream struct doesn't hold a reference to its text object (it can't, as it's a C struct with no knowledge of ruby). Thus, the Ruby object wrapping the C struct needs to hold the reference (and this is being done with rb_ivar_set).
The RToken struct shouldn't need to do this, because it marks its text member in its mark function.
One more thing: you may be able to reproduce this bug by calling GC.start explicitly in your loop rather than having to allocate so many objects that the garbage collector kicks in. This won't fix the problem but might make diagnosis simpler.
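Something like this (a sketch of the isolation loop from the question, with the forced collection added):
while true
  token = @input.next    # the call that goes through the C extension
  w = token.text.scan(/\w+/)
  GC.start               # force a full mark/sweep on every iteration
end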
Perhaps mark it as volatile:
http://www.justskins.com/forums/chasing-a-garbage-collection-bug-98766.html
Maybe your compiler is keeping its reference in a register instead of on the stack. There is some way mentioned, I think in README.EXT, to force an object never to be GC'ed, but the question still remains as to why it's being collected early...