Swift: does assigning a function to a var cause a retain cycle? - swift2

I came across a similar question in Swift Memory Management: Storing func in var, but that didn't solve my problem.
Here is my class definition:
class Test {
    var block: (() -> Int)?

    func returnInt() -> Int {
        return 1
    }

    deinit {
        print("Test deinit")
    }
}
I tried two ways of assigning a value to the block property and got completely different results. The second approach didn't cause a retain cycle, which is quite unexpected:
var t = Test()
// This will lead to retain cycle
// t.block = t.returnInt
// I thought this will also lead to retain cycle but actually didn't
t.block = {
return t.returnInt()
}
t = Test()
As I understand it, the variable t is captured by the block, while the block is a property of t, so can anyone explain why there isn't a retain cycle?

In Swift, all captured variables are captured by reference (in Apple Blocks terminology, all captured local variables are __block). So the t inside the block is shared with the t outside the block; the block does not hold an independent copy of t.
Initially, there is a retain cycle in the second case too: the block holds a reference to the shared variable t, t points to the first Test object, and that Test object's block property points to the block. However, when you re-assign the shared variable t (which is visible both inside and outside the block), you break the retain cycle, because t no longer points to the first Test object.
In the first case, t is effectively captured by value, because t is evaluated immediately in the expression t.returnInt rather than being captured as a variable in a block. So a later reassignment of t outside the block has no effect on the block and does not break the retain cycle. You can think of
t.block = t.returnInt
as kind of like
let tmp = t
t.block = {
return tmp.returnInt()
}
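If you need the block to call back into the object without keeping it alive (i.e. you want neither of the retain cycles above), the usual fix is a weak capture. A minimal sketch, reusing the Test class above (the ?? 0 fallback is only there so the closure still returns an Int once the object is gone):
t.block = { [weak t] in
    return t?.returnInt() ?? 0
}
t = Test()   // prints "Test deinit" – no retain cycle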

Related

ScalaFx children hierarchy and casting / instance reference

I'm wondering if this is the optimal way of doing it with ScalaFX: a GUI is composed of a bunch of nodes, into which I pull content from a SQL DB. The main Pane is a FlowPane populated with a few hundred elements. Each element is composed of a four-level hierarchy (see the numbers describing the levels):
1         2         3              4
VBox -+-> VBox ---> StackPane -+-> ImageView
      +-> Label                +-> Rectangle
As far as I have experienced, I can access the nodes and their attributes at the different levels. I.e. I can give user feedback by changing the color of the Rectangle below the ImageView node when the compound element is chosen by mouse click or via a ContextMenu.
I could access the Rectangle attributes directly, but it is easy to make mistakes, since index-based references like children.get(0) depend directly on the order in which the children were added to their parent.
val lvone = vbnode.children // VBox (main)
val lvtwo = lvone.get(0) // VBox
val lvthree = lvtwo.asInstanceOf[javafx.scene.layout.VBox].children.get(0) // StackPane
val lvfour = lvthree.asInstanceOf[javafx.scene.layout.StackPane].children.get(0) // Rectangle
if (lvfour.isInstanceOf[javafx.scene.shape.Rectangle]) lvfour.asInstanceOf[javafx.scene.shape.Rectangle].style = "-fx-fill: #a001fc;"
println("FOUR IS:"+lvfour.getClass)
Here's a sample demonstrating the "safer" access to the elements in the node hierarchy (the code that creates the node hierarchy is rather messy, so it is not included):
val levelone = vbnode.children
println("LV1 Node userData:"+vbnode.userData) // my database reference for the main / container element
println("LV1 Parent children class:"+levelone.get(0).getClass) // class javafx.scene.layout.VBox
for (leveltwo <- levelone) {
  println("LV2 Children Class:"+leveltwo.getClass)
  println("LV2 Children Class Simple Name:"+leveltwo.getClass.getSimpleName) // VBox
  if (leveltwo.getClass.getSimpleName == "VBox") {
    leveltwo.style = "-fx-border-width: 4px;" +
      "-fx-border-color: blue yellow blue yellow;"
    for (levelthree <- leveltwo.asInstanceOf[javafx.scene.layout.VBox].children) {
      println("LV3 children:"+levelthree.getClass.getName)
      if (levelthree.getClass.getSimpleName == "StackPane") {
        for (levelfour <- levelthree.asInstanceOf[javafx.scene.layout.StackPane].children) {
          println("LV4 children:"+levelfour.getClass.getName)
          if (levelfour.getClass.getSimpleName == "Rectangle") {
            if (levelfour.isInstanceOf[javafx.scene.shape.Rectangle]) println("Rectangle instance confirmed")
            println("LV4 Found a Rectangle")
            println("original -fx-fill / CSS:"+ levelfour.asInstanceOf[javafx.scene.shape.Rectangle].style)
            levelfour.asInstanceOf[javafx.scene.shape.Rectangle].style = "-fx-fill: #a001fc;"
          } // end if
        } // end for levelfour
      } // end if
    } // end for levelthree
  } // end if
} // end for leveltwo
Questions:
Is there a smarter way to do the type casting of the node types, given that only javafx API based references seem to be accepted (BTW I'm using ScalaIDE)? The options I am using are:
1- the simple / shortcut way: comparing class names with leveltwo.getClass.getSimpleName == "VBox", which is a shortcut through the API jungle. But is it efficient and safe?
2- the more cluttered, probably by-the-book style:
if (levelfour.isInstanceOf[javafx.scene.shape.Rectangle])
Another question: instead of the fully qualified javafx reference, i.e. javafx.scene.shape.Rectangle, I would like to use the scalafx reference, but I get an error which forces me to fall back on the javafx based reference. Not a big deal, as I can use the javafx reference, but I wonder if there is a scalafx based option?
Happy to get constructive feedback.
If I understand you correctly, you seem to want to navigate the nodes of a sub-scene (one that belongs to a higher-level UI element construct) in order to change the appearance of some of the nodes within it. Do I have that right?
You raise a number of different issues, all within the one question, so I'll do my best to address them all. As a result, this is going to be a long answer, so please bear with me. BTW, In future, it would help if you ask one question for each issue. ;-)
Firstly, I'm going to take your problem at face value: that you need to browse through a scene in order to identify a Rectangle instance and change its style. (I note that your safe version also changes the style of the second VBox, but I'm going to ignore that for the sake of simplicity.) This is a reasonable course of action if you have little to no control over the structure of each element's UI. (If you directly control this structure, there are far better mechanisms, which I'll come to later.)
At this point, it might be worth expanding on the relationship between ScalaFX and JavaFX. The former is little more than a set of wrappers for the latter, to give the library a Scala flavor. In general, it works like this: the ScalaFX version of a UI class takes a corresponding JavaFX class instance as an argument; it then applies Scala-like operations to it. To simplify things, there are implicit conversions between the ScalaFX and JavaFX instances, so that it (mostly) appears to work by magic. However, to enable this latter feature, you must add the following import to each of your source files that reference ScalaFX:
import scalafx.Includes._
For example, if JavaFX has a javafx.Thing (it doesn't), with setSize and getSize accessor methods, then the ScalaFX version would look like this:
package scalafx
import javafx.{Thing => JThing} // Rename to avoid confusion with ScalaFX Thing.
// ScalaFX wrapper for a Thing.
class Thing(val delegate: JThing) {

  // Auxiliary default constructor. Let's assume a JThing also has a default
  // constructor.
  //
  // Creates a JavaFX Thing when we don't have one available.
  def this() = this(new JThing)

  // Scala-style size getter method.
  def size: Int = delegate.getSize

  // Scala-style size setter method. Allows, say, "size = 5" in your code.
  def size_=(newSize: Int): Unit = delegate.setSize(newSize)

  // Etc.
}

// Companion with implicit conversions. (The real implementation is slightly
// different.)
object Thing {

  // Convert a JavaFX Thing instance to a ScalaFX Thing instance.
  implicit def jfxThing2sfx(jThing: JThing): Thing = new Thing(jThing)

  // Convert a ScalaFX Thing instance to a JavaFX Thing instance.
  implicit def sfxThing2jfx(thing: Thing): JThing = thing.delegate
}
So, quite a lot of work for very little gain, in all honesty (although ScalaFX does simplify property binding and application initialization). Still, I hope you can follow me here. However, this allows you to write code like the following:
import javafx.scene.shape.{Rectangle => JRectangle} // Avoid ambiguity
import scalafx.Includes._
import scalafx.scene.shape.Rectangle
// ...
val jfxRect: JRectangle = new JRectangle()
val sfxRect: Rectangle = jfxRect // Implicit conversion to ScalaFX rect.
val jfxRect2: JRectangle = sfxRect // Implicit conversion to JavaFX rect.
// ...
Next, we come to type checking and casting. In Scala, it's more idiomatic to use pattern matching instead of isInstanceOf[A] and asInstanceOf[A] (both of which are frowned upon).
For example, say you have a Node and you want to see if it is actually a Rectangle (since the latter is a sub-class of the former). In the style of your example, you might write something like the following:
def changeStyleIfRectangle(n: Node): Unit = {
  if(n.isInstanceOf[Rectangle]) {
    val r = n.asInstanceOf[Rectangle]
    r.style = "-fx-fill: #a001fc;"
  }
  else println("DEBUG: It wasn't a rectangle.")
}
The more idiomatic Scala version of the same code would look like this:
def changeStyleIfRectangle(n: Node): Unit = n match {
  case r: Rectangle => r.style = "-fx-fill: #a001fc;"
  case _ => println("DEBUG: It wasn't a rectangle.")
}
This may seem a little finicky, but it tends to result in simpler, cleaner code, as I hope you'll see. In particular, note that case r: Rectangle only matches if that is the real type of n, and it then casts n to r as a Rectangle.
BTW, I would expect that comparing types is more efficient than getting the name of the class via getClass.getSimpleName and comparing it to a string, and there's less chance of error. (For example, if you mistype the class name in the string you're comparing against, e.g. "Vbox" instead of "VBox", this will not result in a compiler error, and the match will simply always fail.)
As you point out, your direct approach to identifying the Rectangle is limited by the fact that it requires a very specific scene structure. If you change how each element is represented, then you must change your code accordingly, or you'll get a bunch of exceptions.
So let's move on to your safe approach. Clearly, it's going to be a lot slower and less efficient than the direct approach, but it still relies upon the structure of the scene, even if it's less sensitive to the order in which the children are added at each level of hierarchy. If we change the hierarchy, it will likely stop working.
Here's an alternative approach that uses the class hierarchy of the library to assist us. In a JavaFX scene, everything is a Node. Furthermore, nodes that have children (such as VBox and StackPane) are subclasses of Pane as well. We'll use a recursive function to browse the elements below a specified starting Node instance: every Rectangle it encounters will have its style changed.
(BTW, in this particular case, there are some issues with implicit conversions, which makes a pure ScalaFX solution a little cumbersome, so I'm going to match directly on the JavaFX versions of the classes instead, renamed to avoid any ambiguity with the equivalent ScalaFX types. The implicit conversions will work fine when calling this function.)
import javafx.scene.{Node => JNode}
import javafx.scene.layout.{Pane => JPane}
import javafx.scene.shape.{Rectangle => JRectangle}
import scala.collection.JavaConverters._
import scalafx.Includes._
// ...
// Change the style of any rectangles at or below starting node.
def setRectStyle(node: JNode): Unit = node match {

  // If this node is a Rectangle, then change its style.
  case r: JRectangle => r.style = "-fx-fill: #a001fc;"

  // If the node is a sub-class of Pane (such as a VBox or a StackPane), then it
  // will have children, so apply the function recursively to each child node.
  //
  // The observable list of children is first converted to a Scala list to simplify
  // matters. This requires the JavaConverters import above.
  case p: JPane => p.children.asScala.foreach(setRectStyle)

  // Otherwise, just ignore this particular node.
  case _ =>
}
// ...
A few quick observations on this function:
You can now use any hierarchy of UI nodes that you like; however, if you have more than one Rectangle node, it will change the style of all of them. If this doesn't work for you, you could add code that checks other attributes of each Rectangle to determine which one to modify.
The asScala method is used to convert the children of the Pane node to a Scala sequence, so we can then use the foreach higher-order function to recursively pass each child in turn to the setRectStyle method. asScala is made available by the import scala.collection.JavaConverters._ statement.
Because the function is recursive, but the recursive call is not in tail position (the last statement of the function), it is not tail-recursive. This means that if you pass a huge scene to the function, you might get a StackOverflowError. You should be fine with any reasonable size of scene. (However, as an exercise, you might want to write a tail-recursive version so that the function is stack safe; see the sketch after these observations.)
This code is going to get slower and less efficient the bigger the scene becomes. Possibly not your top concern in UI code, but a bad smell all the same.
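For what it's worth, here is one shape such a stack-safe version might take, driving the traversal from an explicit work list instead of recursing into each child. This is only a sketch, reusing the renamed JavaFX types and the imports from the function above:
import scala.annotation.tailrec

// Change the style of any rectangles at or below the starting node, without
// growing the call stack with the depth of the scene.
def setRectStyleSafe(start: JNode): Unit = {
  @tailrec
  def loop(pending: List[JNode]): Unit = pending match {
    case Nil => ()
    case (r: JRectangle) :: rest =>
      r.style = "-fx-fill: #a001fc;"
      loop(rest)
    case (p: JPane) :: rest =>
      // Push this pane's children onto the work list and keep going.
      loop(p.getChildren.asScala.toList ++ rest)
    case _ :: rest =>
      loop(rest)
  }
  loop(List(start))
}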
So, as we've seen, having to browse through a scene is challenging, inefficient and potentially error prone. Is there a better way? You bet!
The following will only work if you have control over the definition of the scene for your data elements. If you don't, you're stuck with solutions based upon the above.
The simplest solution is to retain a reference to the Rectangle whose style you want to change as part of a class, then access it directly as needed. For example:
import scalafx.Includes._
import scalafx.scene.control.Label
import scalafx.scene.layout.{StackPane, VBox}
import scalafx.scene.shape.Rectangle
final class Element {

  // Key rectangle whose style is updated when the element is selected.
  private val rect = new Rectangle {
    width = 600
    height = 400
  }

  // Scene representing an element.
  val scene = new VBox {
    children = List(
      new VBox {
        children = List(
          new StackPane {
            children = List(
              // Ignore ImageView for now: not too important.
              rect // Note: This is the rectangle defined above.
            )
          }
        )
      },
      new Label {
        text = "Some label"
      }
    )
  }

  // Call when element selected.
  def setRectSelected(): Unit = rect.style = "-fx-fill: #a001fc;"

  // Call when element deselected (which I assume you'll require).
  def setRectDeselected(): Unit = rect.style = "-fx-fill: #000000;"
}
Clearly, you could pass a data reference as an argument to the class and use that to populate the scene as you like. Whenever you need to change the style, calling one of the two latter functions achieves what you need with surgical precision, no matter what the scene structure looks like.
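Hypothetical usage, assuming the elements live in a FlowPane called pane (both names are mine, not from your code):
val element = new Element
pane.children += element.scene // add this element's sub-scene to the main pane
element.setRectSelected()      // highlight the rectangle
element.setRectDeselected()    // restore the default style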
But there's more!
One of the truly great features about ScalaFX/JavaFX is that it has observable properties that can be used to make the scene manage itself. You will find that most fields on a UI node are of some type "Property". What this allows you to do is to bind a property to the field, such that when you change the property, you change the scene accordingly. When combined with event handlers, the scene takes care of everything all by itself.
Here, I've reworked the latter class. Now, it has a handler that detects when the scene is selected and deselected and reacts by changing the property that defines the style of the Rectangle.
import scalafx.Includes._
import scalafx.beans.property.StringProperty
import scalafx.scene.control.Label
import scalafx.scene.input.MouseButton
import scalafx.scene.layout.{StackPane, VBox}
import scalafx.scene.shape.Rectangle
final class Element {

  // Create a StringProperty that holds the current style for the Rectangle.
  // Here we initialize it to be unselected.
  private val unselected = "-fx-fill: #000000;"
  private val selected = "-fx-fill: #a001fc;"
  private val styleProp = new StringProperty(unselected)

  // A flag indicating whether this element is selected or not.
  // (I'm using a var, but this is heavily frowned upon. A better mechanism might be
  // required in practice.)
  private var isSelected = false

  // Scene representing an element.
  val scene = new VBox {
    children = List(
      new VBox {
        children = List(
          new StackPane {
            children = List(
              // Ignore ImageView for now: not too important.
              // Key rectangle whose style is bound to the above property.
              new Rectangle {
                width = 600
                height = 400
                style <== styleProp // <== means "bind to"
              }
            )
          }
        )
      },
      new Label {
        text = "Some label"
      }
    )

    // Add an event handler. Whenever the VBox (or any of its children) are
    // selected/unselected, we just change the style property accordingly.
    //
    // "mev" is a "mouse event".
    onMouseClicked = { mev =>
      // If this is the primary button, then change the selection status.
      if(mev.button == MouseButton.Primary) {
        isSelected = !isSelected // Toggle selection setting
        styleProp.value = if(isSelected) selected
                          else unselected
      }
    }
  }
}
Let me know how you get on...

blocks and the stack

According to bbum:
2) Blocks are created on the stack. Careful.
Consider:
typedef int(^Blocky)(void);
Blocky b[3];
for (int i=0; i<3; i++)
b[i] = ^{ return i;};
for (int i=0; i<3; i++)
printf("b %d\n", b[i]());
You might reasonably expect the above to output:
0
1
2
But, instead, you get:
2
2
2
Since the block is allocated on the stack, the code is nonsense. It
only outputs what it does because the Block created within the lexical
scope of the for() loop’s body hasn’t happened to have been reused for
something else by the compiler.
I don't understand that explanation. If the blocks are created on the stack, then after the for loop completes wouldn't the stack look something like this:
stack:
---------
^{ return i;} #3rd block
^{ return i;} #2nd block
^{ return i;} #1st block
But bbum seems to be saying that when each loop of the for loop completes, the block is popped off the stack; then after the last pop, the 3rd block just happens to be sitting there in unclaimed memory. Then somehow when you call the blocks the pointers all refer to the 3rd block??
You are completely misunderstanding what "on the stack" means.
There is no such thing as a "stack of variables". The "stack" refers to the "call stack", i.e. the stack of call frames. Each call frame stores the current state of the local variables of that function call. All the code in your example is inside a single function, hence there is only one call frame that is relevant here. The "stack" of call frames is not relevant.
The mention of the "stack" means only that the block is allocated inside the call frame, like local variables. "On the stack" means it has a lifetime akin to local variables, i.e. "automatic storage duration", and its lifetime is limited to the scope in which it was declared.
This means that the block is not valid after the end of the iteration of the for-loop in which it was created. And the pointer you have to the block now points to an invalid thing, and it is undefined behavior to dereference the pointer. Since the block's lifetime is over and the space it was using is unused, the compiler is free to use that place in the call frame for something else later.
You are lucky that the compiler decided to place a later block in the same place, so that when you try to access the location as a block, it produces a meaningful result. But this is really just undefined behavior. The compiler could, if it wanted, place an integer in part of that space and another variable in another part, and maybe a block in another part of that space, so that when you try to access that location as a block, it will do all sorts of bad things and maybe crash.
The lifetime of the block is exactly analogous to that of a local variable declared in the same scope. You can see the same result in a simpler example that uses a local variable instead of a block:
int *b[3];
for (int i=0; i<3; i++) {
    int j = i;
    b[i] = &j;
}
for (int i=0; i<3; i++)
    printf("b %d\n", *b[i]);
prints (probably):
b 2
b 2
b 2
Here, as in the case with the block, you are also storing a pointer to something that is scoped inside the iteration of the loop, and using it after the loop. And again, just because you're lucky, the space for that variable happens to be allocated to the same variable from a later iteration of the loop, so it seems to give a meaningful result, even though it's just undefined behavior.
Now, if you're using ARC, you likely do not see what your quoted text describes, because ARC requires that when something is stored in a variable of block-pointer type (and b[i] has block-pointer type), a copy is made instead of a retain, and the copy is stored. When a stack block is copied, it is moved to the heap (i.e. it is dynamically allocated, has dynamic lifetime, and is memory-managed like other objects), and the copy operation returns a pointer to the heap block. That pointer you can safely use after the scope ends.
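Under manual reference counting you can get the same effect explicitly by copying each block before storing it. A minimal sketch of bbum's example with that change (MRC assumed; the matching releases are omitted for brevity):
typedef int(^Blocky)(void);
Blocky b[3];
for (int i = 0; i < 3; i++)
    b[i] = [^{ return i; } copy]; // copy moves the stack block to the heap
for (int i = 0; i < 3; i++)
    printf("b %d\n", b[i]());     // now prints 0, 1, 2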
Yeah, that does make sense, but you really have to think about it. When b[0] is given its value, the "^{ return 0;}" is never used again. b[0] is just the address of it. The compiler kept overwriting those temp functions on the stack as it went along, so the "2" is just the last function written in that space. If you print those 3 addresses as they are created, I bet they are all the same.
On the other hand, if you unroll your assignment loop and add other references to the blocks, like assigning them to a c array as well, you'll likely see b[0] != b[1] != b[2]:
b[0] = ^{ return 0;};
b[1] = ^{ return 1;};
b[2] = ^{ return 2;};
c[0] = ^{ return 0;};
c[1] = ^{ return 1;};
c[2] = ^{ return 2;};
Optimization settings could affect the outcome.
By the way, I don't think bbum is saying the pop happens after the for loop completion -- it's happening after each iteration hits that closing brace (end of scope).
Mike Ash provides the answer:
Block objects [which are allocated on the stack] are only valid through the lifetime of their
enclosing scope
In bbum's example, the scope of the block is the for-loop's enclosing braces (which bbum omitted):
for (int i=0; i<3; i++) { // <------
    b[i] = ^{ return i;};
} // <------
So, each time through the loop, the newly created block is pushed onto the stack; then when each loop ends, the block is popped off the stack.
If you print those 3 addresses as they are created, I bet they are all
the same.
Yes, I think that's the way that it must have worked in the past. However, now it appears that a loop does not cause the block to be popped off the stack. Now, it must be the method's braces that determine the block's enclosing scope. Edit: Nope. I constructed an experiment, and I still get different addresses for each block:
AppDelegate.h:

typedef int(^Blocky)(void); // ******TYPEDEF HERE********

@interface AppDelegate : NSObject <NSApplicationDelegate>
@end

AppDelegate.m:

#import "AppDelegate.h"

@interface AppDelegate ()
@end

@implementation AppDelegate

-(Blocky)blockTest:(int)i {
    // If the block is allocated on the stack, it should be popped off the stack at the end of this method.
    Blocky myBlock = ^{return i;};
    NSLog(@"%p", myBlock);
    return myBlock;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    // Insert code here to initialize your application
    Blocky b[3];
    for (int i=0; i < 3; ++i) {
        b[i] = [self blockTest:i];
    }
    for (int j=0; j < 3; ++j) {
        NSLog(@"%d", b[j]() );
    }
}

@end
--output:--
0x608000051820
0x608000051850
0x6080000517c0
0
1
2
That looks to me like blocks are allocated on the heap.
Okay, my results above are due to ARC. If I turn off ARC, then I get different results:
0x7fff5fbfe658
0x7fff5fbfe658
0x7fff5fbfe658
2
1606411952
1606411952
That looks like stack allocation. Each pointer points to the same area of memory because after a block is popped off the stack, that area of memory is reused for the next block.
Then it looks like when the first block was called it just happened to get the correct result, but by the time the 2nd block was called, the system had overwritten the reclaimed memory resulting in a junk value? I'm still not clear on how calling a non-existent block results in a value??

AS2, Referencing a Changing Object Name

So I was wondering if there was a way to reference different objects on the stage with the same method, to save repeating lots of lines of code. This is what I have right now:
function bossKilled(i:Number):Void {
    trace("Boss Killed!");
    kills++;
    _root.bossDeath.gotoAndPlay(2);
    _root["pirate"+i+"Active"] = false;     // name of variable would be pirate1Active
    _root["pirate"+(i+1)+"Active"] = true;  // name of variable would be pirate2Active
    bossDeath._x = _root["pirate"+i+"Active"]._x;
    bossDeath._y = _root["pirate"+i+"Active"]._y;
}
However, this reference does not actually affect the variables. I was wondering if this was possible, and if so, what am I doing wrong?
Thanks.
Not sure what you're trying to achieve ... pirate1Active is a BOOL. A BOOL has no _x or _y property (nor any other).
If you are not sure where to find your objects in the object tree, you can use the debugger or add some traces on the MC's timeline, like trace(_parent);
Consider switching to AS3, it is much more object oriented and has better tools support.

Passing NSTextField Pointer to IOUSBInterfaceInterface182 Callback

I'm doing an asynchronous read from a USB printer. The read works correctly. My trouble is updating an NSTextField from within the callback.
-(IBAction)printTest:(id)sender
{
    // Setup... then:
    NSLog(@"starting async read: %@", _printerOutput);
    NSLog(@"_printerOutput pointer = %p", _printerOutput);
    result = (*interface)->ReadPipeAsyncTO(interface,
                                           1,
                                           readBuffer,
                                           numBytesRead,
                                           500,
                                           1000,
                                           USBDeviceReadCompletionCallback,
                                           &(_printerOutput)
                                           );
The callback is defined as:
void USBDeviceReadCompletionCallback(void *refCon, IOReturn result, void *messageArg)
{
    NSTextField *printerOutput = (__bridge NSTextField *) messageArg;
    NSLog(@"_printerOutput pointer = %p", printerOutput);
}
The pointer loses its value when inside of the callback.
starting async read: <NSTextField: 0x10221dc60>
_printerOutput pointer = 0x10221dc60
_printerOutput pointer = 0x0
I've looked in many places trying to mimic different ways to pass in the pointer. There can be only one correct way. :)
Another variation on the theme: (__bridge void *)(_printerOutput). This doesn't work, either.
I understand that the callback is of type IOAsyncCallback1.
Other URLs of note:
http://www.google.com/search?client=safari&rls=en&q=another+usb+notification+example&ie=UTF-8&oe=UTF-8 and updating UI from a C function in a thread
I presume _printerOutput is an NSTextField*?
First, is there a particular reason why are you passing an NSTextField** into the callback? (Note the ampersand in the last argument you're passing to ReadPipeAsyncTO.)
Second, I'd avoid ARC with sensitive code, just as a precaution.
Third, from what I see, the last argument of ReadPipeAsyncTO is called refcon. Is it a coincidence that the callback's first argument is called refCon? Note that you're trying to get the text field from messageArg, not refCon.
To extend on my third point…
ReadPipeAsyncTO has an argument called refcon. This is the last argument.
Please pass _printerOutput there. Not a pointer to _printerOutput (do not pass &(_printerOutput)) -- _printerOutput is already a pointer.
Now finally. Look at the first argument of the callback. It's called refcon. In fact -- let's see what Apple docs say about this callback:
refcon
The refcon passed into the original I/O request
My conclusion is that your code should read:
void USBDeviceReadCompletionCallback(void *refCon, IOReturn result, void *messageArg)
{
    NSTextField *printerOutput = (__bridge NSTextField *) refCon; // <=== the change is here
    NSLog(@"_printerOutput pointer = %p", printerOutput);
}
Can you, please, try this out? I get a feeling that you didn't try this.
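In other words, the call site would pass the text field itself (bridged to void *) as the refcon argument. A sketch using the names from your code, assuming ARC as in the original:
result = (*interface)->ReadPipeAsyncTO(interface,
                                       1,
                                       readBuffer,
                                       numBytesRead,
                                       500,
                                       1000,
                                       USBDeviceReadCompletionCallback,
                                       (__bridge void *)_printerOutput); // the pointer itself, not &_printerOutput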
Small but possibly important digression: Were it some other object, and if you didn't use ARC, I'd suggest retaining the _printerOutput variable when passing it into ReadPipeAsyncTO, and releasing it in the callback.
But, since the text field should, presumably, have the lifetime of the application, there is probably no need to do so.
ARC probably loses track of the need for the object behind the pointer to exist once it's passed into C code, but it doesn't matter, since the pointer is still stored in the printerOutput property. Besides, once a pointer is in C code, nothing can just "follow it around" and "reset it".
Confusion when it comes to understanding and explaining the concepts is precisely why I said "avoid ARC with sensitive code". :-)

Is there anything wrong with this pattern for a JS library?

I admittedly know little about the inner workings of JavaScript, but I need to make a library and would like to learn (hence asking here). I understand using the closure and exporting to window so as not to pollute the global namespace, but beyond that it confuses me a bit.
(function() {
    var Drop = window.Drop = function() {
        var files = [];
        var add = function(word) {
            files.push(word);
            return files;
        }
        return {
            files: files,
            add: add
        }
    }
})()
// All of these seem to be the same?
var a = Drop();
var b = new Drop();
var c = new Drop;
// Each has their own state which is what I want.
a.add("file1");
b.add("file2");
c.add("file3");
Why are all three ways of "initializing" Drop the same?
What exactly gives them the ability to have their own state?
Is there an alternative to the return syntax to export those functions on Drop?
Is there just a flat out better best practice way of creating a self contained library like this?
I have searched around the net, but have found very little consistency on this subject.
The first way (Drop()) just calls the function as normal, so this is the global object (window in browser environments). It does its stuff and then returns an object, as you'd expect.
The second way (new Drop()) creates a new Drop object and executes the constructor with this set to that object. You do not, however, use this anywhere and return an object created from an object literal, so the Drop object is discarded and the object literal returned instead.
The third way (new Drop) is semantically the same as the second; it is only a syntactic difference.
They all have their own state because each time you call Drop, it has its own set of local variables distinct from the local variables of any other call to Drop.
You could transform your code to use the normal new syntax and prototypes. This has a few advantages: namely, you only create the add function once rather than one for each Drop call. Your modified code might look like this:
function Drop() {
    this.files = [];
}
Drop.prototype.add = function(word) {
    this.files.push(word);
    return this.files;
};
By doing this, though, you lose being able to call it without new. There is, however, a workaround: You can add this as the first line inside function Drop:
if(!(this instanceof Drop)) {
return new Drop();
}
Since when you call it with new, this will be a Drop, and when you call it without new, this will be something other than a Drop, you can see if this is a Drop, and if it is, continue initializing; otherwise, reinvoke it with new.
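Putting the guard together with the prototype version, the whole constructor might look like this (a sketch):
function Drop() {
    if (!(this instanceof Drop)) {
        return new Drop(); // called without `new`: re-invoke it correctly
    }
    this.files = [];
}
Drop.prototype.add = function(word) {
    this.files.push(word);
    return this.files;
};

var a = Drop();      // now works with or without `new`
var b = new Drop();
a.add("file1");      // ["file1"] – each instance still has its own files array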
There is also another semantic difference. Consider the following code:
var drop = new Drop();
var adder = drop.add;
adder(someFile);
Your code will work here. The prototype-based code will not, since this will be the global object, not drop. This, too, has a workaround: somewhere in your constructor, you can do this:
this.add = this.add.bind(this);
Of course, if your library's consumers are not going to pull the function out of the object, you won't need to do this. Furthermore, you might need to shim Function.prototype.bind for browsers that don't have it.
No. It's all a matter of taste.
Why are all three ways of "initializing" Drop the same?
// All of these seem to be the same?
var a = Drop();
var b = new Drop();
var c = new Drop;
When you use new in JavaScript to invoke a function, the value of this inside the function becomes the new object.
But the reason they're the same in your case is that you're not using this at all. You're making a separate object using object literal syntax, and returning it instead, so the new has no impact.
What exactly gives them the ability to have their own state?
Because each function invocation makes a new object, each object is entirely different for each invocation.
The functions assigned to the object are recreated in each Drop invocation, and therefore create a closure over the enclosing variable scope. As such, the files array of each invocation is continuously accessible to the functions made in each respective invocation.
Is there an alternative to the return syntax to export those functions on Drop?
Yes. Assign the functions and array to this, and remove the return statement. But that will require the use of new. Alternatively, put the functions on the .prototype object of Drop, and they'll be shared among all instances made using new, but keep the array assigned to this in the constructor so that it's not shared.
For the prototyped functions to reference the array, they would use this.files.
Is there just a flat out better best practice way of creating a self contained library like this?
JavaScript is very flexible. There are many ways to approach a single problem, each with its own advantages/disadvantages. Generally it'll boil down to taking advantage of closures, of prototypal inheritance, or some combination of both.
Here's a full prototypal inheritance version. Also, the outer (function() {})() wrapper isn't really being used for anything, so I'm going to add a variable to take advantage of it.
(function() {
    var totalObjects = 0; // visible only to functions created in this scope

    var Drop = window.Drop = function() {
        this.files = [];
        this.serialNumber = totalObjects++;
    }

    Drop.prototype.add = function(word) {
        this.files.push(word);
        return this.files;
    };
})();
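A quick usage sketch to show what is shared and what isn't (the values are what I'd expect, assuming no Drop instances were created beforehand):
var d1 = new Drop();
var d2 = new Drop();
console.log(d1.serialNumber); // 0 – totalObjects is shared through the closure
console.log(d2.serialNumber); // 1
console.log(d1.add("file1")); // ["file1"] – but each instance has its own files array
console.log(d2.files);        // []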
