I was pondering language features and I was wondering if the following feature had been implemented in any languages.
A way of declaring that an object may only be accessed within a mutex. So, for example, in Java you would only be able to access the object inside a synchronized block, and in C# inside a lock block.
A compiler error would ensue if the object was used outside of a mutex block.
Any thoughts?
UPDATE
I think some people have misunderstood the question. I'm not asking whether you can lock objects; I'm asking whether there is a mechanism to state, at the declaration of an object, that it may only be accessed from within a lock/synchronized statement.
There are two ways to do that.
Either your program refuses to run a method unless the protecting mutex is locked by the calling thread (a runtime check), or it refuses to compile (a compile-time check).
The first way is what the C# lock statement does.
The second requires a compiler able to evaluate every possible execution path, which is hardly feasible.
In Java you can add the synchronized keyword to a method, but that is only syntactic sugar for wrapping the entire method body in a synchronized(this) block (for non-static methods).
So for Java there is no language construct that enforces that behavior. You can try calling .wait() on this with a very short timeout to check that the calling code has acquired the monitor (it throws IllegalMonitorStateException if it hasn't), but that's just checking after the fact.
In Objective-C, you can use the @property and @synthesize directives to let the compiler generate the code for accessors. By default the generated accessors are atomic.
Demanding locks on everything as you describe would create the potential for deadlocks, as one might be forced to take a lock sooner than one would otherwise.
That said, there are approaches similar to what you describe - Software Transactional Memory, in particular, avoids the deadlock issue by allowing rollbacks and retries.
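For what it's worth, while this isn't the compile-time enforcement being asked about, the intent is often approximated by encapsulation: keep the guarded data private and reachable only through methods that take the lock. A minimal sketch in Go (the counter type is purely illustrative):

package guarded

import "sync"

// counter can only be mutated through methods that hold mu; because n is
// unexported, code outside this package cannot bypass the lock.
type counter struct {
	mu sync.Mutex
	n  int // guarded by mu
}

func (c *counter) Incr() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func (c *counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

It's still a convention rather than a compiler guarantee, but it narrows the places where the lock can be bypassed to a single package.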
I've allocated some GPU global memory with cudaMalloc(), say, in the constructor of some class. Now it's time to destruct the instance I've constructed, and I have my instance's data pointer. The thing is, I'm worried maybe some mischievous code elsewhere has called cudaDeviceReset(), after which my cudaFree() will probably fail (I'll get an invalid device pointer error). So, how can I tell whether my pointer is eligible for cudaFree()ing?
I don't believe you can do much about that.
About the best you can do is try to engineer the lifespan of objects which will call the CUDA APIs in their destructors so that they do so before context destruction. In practice, that means having them fall out of scope in a well-defined fashion before the context is automatically or manually torn down.
For a call like cudaFree(), which is somewhat "fire and forget" anyway, the best thing to do might be to write your own wrapper for the call and explicitly catch and tastefully ignore any obvious error conditions which would arise if the call was made after context destruction.
Given what talonmies says, one might consider doing the converse:
wrap your cudaDeviceReset() calls so that they also maintain a 'generation counter'.
Increments of the counter are protected by a lock: while you hold the lock, you reset the device and increment the generation counter.
Wrap cudaMalloc() so that each allocation also records the generation it was obtained under (you might need a class/struct for that); this too takes the lock.
Wrap cudaFree() so that it takes the lock and only really calls cudaFree() if the generation has not changed since the allocation.
... now, you might say "Is all that locking worth it? At worst, you'll get an error, it's not such a big deal." And, to be honest - I'm not sure it's worth it. You could make this somewhat less painful by using a Reader-Writer lock instead of a simple lock, where the allocate and free are just readers that can all access concurrently.
I know there are no destructors in Go since technically there are no classes. As such, I use initClass to perform the same functions as a constructor. However, is there any way to create something to mimic a destructor in the event of a termination, for the use of, say, closing files? Right now I just call defer deinitClass, but this is rather hackish and I think a poor design. What would be the proper way?
In the Go ecosystem, there exists a ubiquitous idiom for dealing with objects which wrap precious (and/or external) resources: a special method designated for freeing that resource, called explicitly — typically via the defer mechanism.
This special method is typically named Close(), and the user of the object has to call it explicitly when they're done with the resource the object represents. The standard io package even has a special interface, io.Closer, declaring that single method. Objects implementing I/O on various resources such as TCP sockets, UDP endpoints and files all satisfy io.Closer, and are expected to be explicitly Closed after use.
Calling such a cleanup method is typically done via the defer mechanism, which guarantees the method will run regardless of whether the code executed after resource acquisition panics.
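To make the idiom concrete, here is a minimal sketch (the Resource type and file name are purely illustrative):

package resource

import "os"

// Resource wraps a precious handle; Close releases it and satisfies io.Closer.
type Resource struct {
	f *os.File
}

func OpenResource(name string) (*Resource, error) {
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	return &Resource{f: f}, nil
}

func (r *Resource) Close() error {
	return r.f.Close()
}

func use() error {
	r, err := OpenResource("foo.txt")
	if err != nil {
		return err
	}
	defer r.Close() // runs even if the code below panics
	// ... work with r ...
	return nil
}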
You might also notice that not having implicit "destructors" quite balances not having implicit "constructors" in Go. This actually has nothing to do with not having "classes" in Go: the language designers just avoid magic as much as practically possible.
Note that Go's approach to this problem might appear to be somewhat low-tech, but in fact it's the only workable solution for a runtime featuring garbage collection. In a language with objects but without GC, say C++, destroying an object is a well-defined operation because an object is destroyed either when it goes out of scope or when delete is called on its memory block. In a runtime with GC, the object will be destroyed at some mostly indeterminate point in the future by the GC scan, and may not be destroyed at all. So if the object wraps some precious resource, that resource might get reclaimed way past the moment in time the last live reference to the enclosing object was lost, and it might even not get reclaimed at all—as has been well explained by @twotwotwo in their answer.
Another interesting aspect to consider is that Go's GC is fully concurrent (with the regular program execution). This means a GC thread which is about to collect a dead object might (and usually will) not be the thread(s) which executed that object's code when it was alive. In turn, this means that if Go types could have destructors, the programmer would need to make sure whatever code the destructor executes is properly synchronized with the rest of the program—if the object's state affects some data structures external to it. This might actually force the programmer to add such synchronization even if the object does not need it for its normal operation (and most objects fall into that category). And think about what happens if those external data structures happened to be destroyed before the object's destructor was called (the GC collects dead objects in a non-deterministic way). In other words, it's much easier to control — and to reason about — object destruction when it is explicitly coded into the program's flow: both for specifying when the object has to be destroyed, and for guaranteeing proper ordering of its destruction with regard to destroying the data structures external to it.
If you're familiar with .NET, it deals with resource cleanup in a way which resembles that of Go quite closely: your objects which wrap some precious resource have to implement the IDisposable interface, and a method, Dispose(), exported by that interface, must be called explicitly when you're done with such an object. C# provides some syntactic sugar for this use case via the using statement which makes the compiler arrange for calling Dispose() on the object when it goes out of the scope declared by the said statement. In Go, you'll typically defer calls to cleanup methods.
One more note of caution. Go wants you to treat errors very seriously (unlike most mainstream programming languages with their "just throw an exception and don't give a fsck about what happens due to it elsewhere and what state the program will be in" attitude), and so you might consider checking error returns of at least some calls to cleanup methods.
A good example is instances of the os.File type representing files on a filesystem. The fun part is that calling Close() on an open file might fail for legitimate reasons, and if you were writing to that file this might indicate that not all the data you wrote had actually made it to the file system. For an explanation, please read the "Notes" section in the close(2) manual.
In other words, just doing something like
fd, err := os.Open("foo.txt")
defer fd.Close()
is okay for read-only files in 99.9% of cases, but for files opened for writing you might want to implement more involved error checking and some strategy for dealing with failures (mere reporting, wait-then-retry, ask-then-maybe-retry, or whatever).
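As an illustration of what that might look like (the writeAll helper is made up for this sketch), one way to surface a failed Close() of a file opened for writing:

package writefile

import "os"

// writeAll writes data to a newly created file and reports a failed Close as
// an error: for a file opened for writing, Close may be the first place a
// write-back failure becomes visible.
func writeAll(name string, data []byte) (err error) {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}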
runtime.SetFinalizer(ptr, finalizerFunc) sets a finalizer--not a destructor but another mechanism to maybe eventually free up resources. Read the documentation there for details, including downsides. They might not run until long after the object is actually unreachable, and they might not run at all if the program exits first. They also postpone freeing memory for another GC cycle.
If you're acquiring some limited resource that doesn't already have a finalizer, and the program would eventually be unable to continue if it kept leaking, you should consider setting a finalizer. It can mitigate leaks. Unreachable files and network connections are already cleaned up by finalizers in the stdlib, so it's only other sorts of resources where custom ones can be useful. The most obvious class is system resources you acquire through syscall or cgo, but I can imagine others.
Finalizers can help get a resource freed eventually even if the code using it omits a Close() or similar cleanup, but they're too unpredictable to be the main way to free resources. They don't run until GC does. Because the program could exit before next GC, you can't rely on them for things that must be done, like flushing buffered output to the filesystem. If GC does happen, it might not happen soon enough: if a finalizer is responsible for closing network connections, maybe a remote host hits its limit on open connections to you before GC, or your process hits its file-descriptor limit, or you run out of ephemeral ports, or something else. So it's much better to defer and do cleanup right when it's necessary than to use a finalizer and hope it's done soon enough.
You don't see many SetFinalizer calls in everyday Go programming, partly because the most important ones are in the standard library and mostly because of their limited range of applicability in general.
In short, finalizers can help by freeing forgotten resources in long-running programs, but because not much about their behavior is guaranteed, they aren't fit to be your main resource-management mechanism.
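For illustration only, assuming a Unix-like system and a hypothetical wrapper around a raw descriptor obtained via syscall, here is a sketch of using a finalizer purely as a safety net behind an explicit Close():

package fdwrap

import (
	"runtime"
	"syscall"
)

// fdWrapper owns a raw file descriptor obtained outside of os.File
// (e.g. directly from syscall) on a Unix-like system.
type fdWrapper struct {
	fd int
}

func newFDWrapper(fd int) *fdWrapper {
	w := &fdWrapper{fd: fd}
	// Safety net only: if the caller forgets Close, the GC may eventually
	// release the descriptor, with no guarantee of when or whether at all.
	runtime.SetFinalizer(w, func(w *fdWrapper) {
		syscall.Close(w.fd)
	})
	return w
}

// Close is the intended cleanup path; it clears the finalizer so the
// descriptor is not closed a second time.
func (w *fdWrapper) Close() error {
	runtime.SetFinalizer(w, nil)
	return syscall.Close(w.fd)
}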
There are Finalizers in Go. I wrote a little blog post about it. They are even used for closing files in the standard library as you can see here.
However, I think using defer is more preferable because it's more readable and less magical.
I was wondering if calling Write() on an os.File is thread safe. I'm having a hard time finding any mention of thread safety in the docs.
The convention (at least for the standard library) is the following: No function/method is safe for concurrent use unless explicitly stated (or obvious from the context).
It is not safe to write concurrently to an os.File via Write() without external synchronization.
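A minimal sketch of adding that external synchronization when several goroutines share one *os.File (the wrapper type is illustrative):

package syncwrite

import (
	"os"
	"sync"
)

// syncedFile serializes Write calls to a shared *os.File with a mutex,
// providing the external synchronization mentioned above.
type syncedFile struct {
	mu sync.Mutex
	f  *os.File
}

func (s *syncedFile) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.f.Write(p)
}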
After browsing the source code a little bit I found the following method which is eventually called by file.Write(). Since there are race condition checks in place, I'm assuming that the call is in fact not thread-safe within Go (Source).
However, it seemed unlikely that those system calls wouldn't be thread-safe on an OS level. After some browsing I came upon this interesting answer that fueled my suspicions even more. For Windows, the source indicates a call to WriteFile, which also appears to be thread-safe.
In what order are objects in a .vbs destroyed?
That is, given these globals:
Set x = New Xxx
Set y = New Yyy
I'm interested in answers to any of the following.
For instances of classes implemented in the .VBS, in what order will Class_Terminate be called? Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
EDIT: I understand that Class_Terminate will be called when the last reference to an object is released. What I meant was: in what order will x and y be released, and is it guaranteed? Assume for simplicity that x and y are the only references to their respective objects.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
EDIT: I understand that a COM library may set up its own internal circular references that the script host engine knows nothing about; I'm interested in exploring what could affect the answer to the first question.
Are the answers to the above different if x and y were local to a Sub or Function rather than global?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting, however these may cause an error to be reported).
When is the WScript object destroyed?
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Does JScript's object destruction model differ to VBScript's?
I can find the answers to some of these questions empirically, but I'm interested in whether any of them are guaranteed / documented.
Do post even if you only know some of the answers - or further relevant issues.
I designed and implemented this feature in VBScript.
Most of the answers are in my articles that Mark references, but just to clarify:
in what order will Class_Terminate be called?
Terminators are in general called immediately when the last reference to an object is released. However, due to circular references and other issues, it is generally a very bad idea to rely upon a deterministic order of termination.
Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
As I noted in my articles, unterminated objects are terminated when the engine is shut down. As an implementation detail, the termination queue is executed in the order that the objects were created in. However, this is an undocumented implementation detail that you should not rely upon.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
It can. There could be circular references amongst those objects that are torn down at unpredictable times.
I'm thinking of objects at global scope, when the program quits - is it different for objects at e.g. function scope?
I don't understand the question. Can you clarify?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting, however these may cause an error to be reported).
It can matter, yes. VBScript does not make any guarantee that terminators always run. The host that owns the engine can shut down its process by "failing fast" in a manner that is not guaranteed to cleanly shut down the engine, for example. (In the event of a catastrophic failure, this is sometimes desirable; if you don't know what is wrong then sometimes running termination code makes the problem worse, not better.)
Windows Script Host does attempt to shut down the engine cleanly when Quit is called.
When is the WScript object destroyed?
When the Windows Script Host process termination logic runs.
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Yes, it can matter.
Does JScript's object destruction model differ to VBScript's?
Yes, very much so.
JScript "Classic" from the period when I worked on it (pre 2001) uses a nondeterministic mark-and-sweep garbage collector which does handle circular references amongst script objects, but does NOT handle circular references between script and browser objects. More recent versions of JScript "Classic" have a modified garbage collector that DOES handle circular references between script and browser objects (though it does not necessarily detect circularities involving JScript objects and third party ActiveX objects.)
The IE 9 version of JScript has a completely rewritten garbage collector that uses very different technology; I have chatted a bit with its designer but I do not have enough technical knowledge to discuss its characteristics in any kind of depth.
JScript .NET of course uses the CLR garbage collector.
Can I ask why you care about all this stuff?
Also, note that I haven't looked at this code in over a decade; take all of this with the appropriate level of skepticism. My memory may be faulty.
I realise that I can't access Form controls from the DoWork event handler of a BackgroundWorker. (And if I try to, I get an Exception, as expected).
However, am I allowed to access other (custom) objects that exist on my Form?
For instance, I've created a "Settings" class and instantiated it in my Form and I seem to be able to read and write to its properties.
Is it just luck that this works?
What if I had a static class? Would I be able to access that safely?
@Engram:
You've got the gist of it - cross-thread call checking is just a nice feature MS put into the .NET Framework to prevent the "bonehead" type of parallel programming mistakes. It can be overridden, as I'm guessing you've already found out, by setting the static CheckForIllegalCrossThreadCalls property on the Control class (on the class, not on an instance - e.g. Control.CheckForIllegalCrossThreadCalls = false, not lblMyLabel.CheckForIllegalCrossThreadCalls).
But more importantly, you're right about the need for some kind of locking mechanism. Whenever you have multiple threads of execution (be it threads, processes or whatever), you need to make sure that when one thread is reading or writing a variable, some other thread can't barge in and change that value under the first thread's feet.
The .NET Framework actually provides several other mechanisms which might be more useful, depending on circumstances, than locking in code. The first is to use a Monitor class, which has the effect of locking a particular object. When you use this, other threads can continue to execute, as long as they don't try to lock that same object. Another very useful and common parallel-programming idea is the Mutex (or Semaphore). The Mutex is basically like a game of Capture the Flag between your threads. If one thread grabs the flag, no other threads can grab it until the first thread drops it. (A Semaphore is just like a Mutex, except that there can be more than one flag in a game.)
Obviously, none of these concepts will work in every particular problem - but having a few more tools to help you out might come in handy some day :)
You should communicate with the user interface through the ProgressChanged and RunWorkerCompleted events (and never from the DoWork event handler, as you have noted).
In principle, you could check the InvokeRequired property and call Invoke yourself, but the designers of the BackgroundWorker class created the ProgressChanged callback event for the purpose of updating UI elements.
[Note: BackgroundWorker events are not marshaled across AppDomain boundaries. Do not use a BackgroundWorker component to perform multithreaded operations in more than one AppDomain.]
MSDN Ref.
Ok, I've done some more research on this and I think have an answer. (Let the votes decide if I'm right!)
The answer is: you can access any custom object that's in scope, but your access will not be thread-safe.
To ensure that it is thread-safe you should probably be using lock. The lock keyword prevents more than one thread from executing a particular piece of code at a time. (Subject to actually using it properly!)
The Cross Threading Exception that occurs when you try to access a Control is a safety mechanism designed especially for Controls. (It's easier and probably more efficient to get the user to make thread-safe calls than it is to design the controls themselves to be thread-safe.)
You can't access controls that were created in one thread from another thread.
You can either use the Settings class that you mentioned, or use the InvokeRequired property and the Invoke methods of the control.
I suggest you look at the examples on those pages:
http://msdn.microsoft.com/en-us/library/ms171728.aspx
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired.aspx