The constants WM_IME_COMPOSITION and WM_IME_KEYLAST both have the same value, 0x010F. It seems like a Windows program that receives one of these window messages would not know which message it received. So how would one write code to handle these messages?
I am asking out of curiosity; I do not need to handle either WM_IME_COMPOSITION or WM_IME_KEYLAST. I was simply looking through some documentation and noticed this oddity that I do not understand.
WM_IME_KEYLAST is not a message identifier. It is a symbolic constant that marks the end of the WM_IME_* range of messages1. You see this pattern throughout the Windows SDK, e.g. there are WM_KEYFIRST and WM_KEYLAST symbols that describe the range of values related to key messages.
To answer your question: you do not handle WM_IME_KEYLAST, so there is no ambiguity. You can (and should) use it in expressions where you want to handle a range of messages, as in the sketch below.
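A minimal sketch of such a range check (the WM_* constants and DefWindowProc are the real Windows API; the window-procedure skeleton itself is just for illustration):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // WM_KEYFIRST..WM_KEYLAST brackets all keyboard messages, so one
    // range check covers WM_KEYDOWN, WM_KEYUP, WM_CHAR, and the rest.
    if (msg >= WM_KEYFIRST && msg <= WM_KEYLAST) {
        // ... handle keyboard input here ...
        return 0;
    }

    // WM_IME_COMPOSITION, by contrast, is matched as an individual message.
    if (msg == WM_IME_COMPOSITION) {
        // ... handle IME composition here ...
    }

    return DefWindowProc(hwnd, msg, wParam, lParam);
}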
1 Although in this particular case I'm not entirely sure, as there is no corresponding WM_IME_KEYFIRST symbol. WM_IME_KEYLAST might just be an obsolete name, where the then-unused message value was reused for the new WM_IME_COMPOSITION message. Once shipped, you cannot go back and remove the symbol, so it is still there, but it no longer serves any purpose.
In relation to this thread, this is also roughly what I am trying to do, though I have had a bit more leeway with it.
My problem is that I am currently working on a defining program (for my TI-89 Titanium) to write out the definitions of variables. However, considering I had an indefinite number of variables to add, I thought using the Define function over and over again would waste memory and processing power. So my thinking was: save the variable to another variable, to be defined in a later portion of the program.
Prompt x
Lbl x_d_r
x_d_r->q:Goto def
Lbl def
Define expr(q)[1]=x
where x_d_r has no assigned value. The program was supposed to use the defined string as a list value to be set to x. However, the obvious error came about.
So I played around on the home screen and program screen for a bit and came across entry(1) and ans(1). Back on the TI-83 (or TI-84) I could basically go (if I remember correctly)
Disp q*1
x->ans(1)
However, ans(1) on a TI-89 Titanium is based upon the last answer submitted to the home screen. Even then, ans(1) or entry(1) gets replaced in the program by just that. Luckily, I found a way around this.
Prgm
expr(char(120)&char(22)&char(97)&char(110)&char(115)&char(40)&char(49)&char(41))
EndPrgm
For those who do not know, this simply expresses x->ans(1), which is a way for the code to transmit ans(1) within a program without the editor replacing the code that says so.
But it still does not work, as a value needs to be sent to the home screen in order for it to record properly. This is one of those advantages of the TI-84 or TI-83 that I wish the Titanium still had. So I have spent some time searching for ways to display values of q on the home screen from within a program.
So far I have learned that functions, when used straight from the home screen, return the value of q to the same place. However, I have no way of implementing this in an actual program, as the function does not wish to transmit the value to the home screen, and it is rather useless within the program.
Secondly, I have found this website, which details methods of returning values to the home screen. While method 1 seems to hold promise, I do not seem to have any way of accessing that folder/program, most likely because it is one that the author made and has not shared its location in the PDF. I do like the expr("q"&":stop"), but q is not evaluated out, so maybe I would have to rework it somehow.
While this was happening, I thought another idea could be using the paste key within a program, but I have no idea how to implement what is found from getKey(), let alone how the second and grab buttons factor in.
Or I could somehow have ans(1) look someplace other than the home screen, preferably the I/O screen, but maybe some other list or data matrix.
Anybody have any ideas on how to relay a value to the home screen, be it through a function, pasting, or something else, and have the program I defined earlier define it as a value?
UPDATE+1
OK, I am beginning to question whether I am making this more complex than it needs to be...
After all, I am only going for x->x_d_r[1], which is already defined elsewhere. So does it beat
x->q:Goto def
Lbl def
Define expr(q)=x
(or something like that, which calls a history-recording program to define values) in terms of processing speed and memory count?
Got it. See here for what I was really trying to do.
As an explanation of the main problem again: I wanted to be able to post a string value of q to be defined by another value of x.
The expr() function is quite a powerful tool on the TI-89, and like the person in that other forum, I underestimated it. What that person was trying to do was
InputStr "Function:",f(x)
expr(f)→f(x)
And was later answered by reworking it as
InputStr "function", n
expr(n & "->f(x)")
The expression tool simply expresses what is in the parentheses. So during break periods at school today, I reworked it in my head, thinking, "What if I tried rewriting the parentheses so it reads expr("x->"&string(q))?"
Lo and behold, it works. I tested it with the fuller version of Define to get
td()
Prgm
  Prompt x
  x_d_r->q            © q now holds the symbolic name x_d_r
  expr("x->"&string(q)&"[1]")  © builds and evaluates x->x_d_r[1]
  Disp x_d_r[1]
  DelVar x_d_r
EndPrgm
Tried, tested, and true. It works all the way. What I think is happening is that anything that is not within the quotes is evaluated immediately in an expression, while the quoted objects are simply expressed and appended later in response to the "&" operator. Furthermore, it makes sense if I describe it in English: "Express x to be stored into the string of q's respective table."
For the variables' sake, I would have to look into ways to make x_d_r local to the program, without compromising the fact that the x_d_r portion is not considered a store value when executing x_d_r->q. But knowing what I know now, I could probably do
expr("q"+"x_d_r"&->a)
expr("x->"&string(a)-"q"&"[1]")
In order to bypass that problem.
I'm looking at a piece of very old VB6, and have come across usages such as
Form5!ProgressBar.Max = time_max
and
Form5!ProgressBar.Value = current_time
Perusing the answer to this question here and reading this page here, I deduce that these things mean the same as
Form5.ProgressBar.Max = time_max
Form5.ProgressBar.Value = current_time
but it isn't at all clear that this is the case. Can anyone confirm or deny this, and/or point me at an explanation in words of one syllable?
Yes, Form5!ProgressBar is almost exactly equivalent to Form5.ProgressBar
As far as I can remember there is one difference: the behaviour if the Form5 object does not have a ProgressBar member (i.e. the form does not have a control called ProgressBar). The dot-notation is checked at compile time but the exclamation-mark notation is checked at run time.
Form5.ProgressBar will not compile.
Form5!ProgressBar will compile but will give an error at runtime.
IMHO the dot notation is preferred in VB6, especially when accessing controls. The exclamation mark is only supported for backward-compatibility with very old versions of VB.
The default member of a Form is (indirectly) the Controls collection.
The bang (!) syntax is used for collection access in VB, and in many cases the compiler makes use of it to early bind things that otherwise would be accessed more slowly through late binding.
Far from deprecated, it is often preferable.
However, in this case, since the default member of Form objects is [_Default] As Object, containing a reference to a Controls As Object instance, there is no particular advantage or disadvantage to this syntax over:
Form5("ProgressBar").Value
I agree, however, that in this case it is better to access the control more directly, as a member of the Form, as in:
Form5.ProgressBar.Value
Knowing the difference between these is a matter of actually knowing VB. The difference isn't simply syntactic, though: the two "paths" do different things that arrive at the same result.
Hopefully this answer offers an explanation rather than merely invoking voodoo.
I'm considering how to do automatic bug tracking, and as part of that I'm wondering what is available to match source code line numbers (or more accurate numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code.)
The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans; however, this has some limitations:
It doesn't handle cross file motion.
It might not play well with lines that get changed.
It doesn't look at the information available in the intermediate versions.
It provides no way to manually patch up lines when the diff tool gets things wrong.
It's kinda clunky.
Before I start diving into developing something better:
What already exists to do this?
What features do similar systems have that I've not thought of?
Why do you need to do this? If you use decent source version control, you should have access to old versions of the code; you can simply provide a link to that so people can see the bug in its original place. In fact, the main problem I see with this system is that the bug may have already been fixed, but your automatic line-tracking code will point to a line and say there's a bug there. It seems this system would be a pain to build and not provide a whole lot of help in practice.
My suggestion is: instead of trying to track line numbers, which as you observed can quickly get out of sync as software changes, you should decorate each assertion (or other line of interest) with a unique identifier.
Assuming you're using C, in the case of assertions, this could be as simple as changing something like assert(x == 42); to assert(("check_x", x == 42)); -- this is functionally identical, due to the semantics of the comma operator in C and the fact that a string literal will always evaluate to true.
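A minimal sketch of that decoration (standard assert; the tag name "check_x" is just an example). Because assert prints the text of the whole expression on failure, the tag appears verbatim in the output and survives line-number churn:

#include <cassert>

int main() {
    int x = 41;  // deliberately wrong so the assert fires

    // The comma operator evaluates the string literal (always true,
    // and discarded) and then x == 42, so the check's behaviour is
    // unchanged -- but the failure message now carries the "check_x" tag.
    assert(("check_x", x == 42));

    return 0;
}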
Of course this means that you need to identify a priori those items that you wish to track. But given that there's no generally reliable way to match up source line numbers across versions (by which I mean that, for any mechanism you could propose, I believe I could propose a situation in which that mechanism does the wrong thing), I would argue that this is the best you can do.
Another idea: If you're using C++, you can make use of RAII to track dynamic scopes very elegantly. Basically, you have a Track class whose constructor takes a string describing the scope and adds this to a global stack of currently active scopes. The Track destructor pops the top element off the stack. The final ingredient is a static function Track::getState(), which simply returns a list of all currently active scopes -- this can be called from an exception handler or other error-handling mechanism.
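A minimal sketch of that idea, assuming single-threaded use (the Track class and getState() names follow the description above; everything else is an assumption):

#include <iostream>
#include <string>
#include <vector>

class Track {
public:
    explicit Track(std::string scope) {
        stack().push_back(std::move(scope));  // enter scope
    }
    ~Track() {
        stack().pop_back();  // leave scope, even when unwinding an exception
    }
    Track(const Track&) = delete;
    Track& operator=(const Track&) = delete;

    // Snapshot of all currently active scopes, outermost first.
    static std::vector<std::string> getState() { return stack(); }

private:
    static std::vector<std::string>& stack() {
        static std::vector<std::string> s;  // use thread_local for per-thread stacks
        return s;
    }
};

void inner() {
    Track t("inner");
    for (const auto& scope : Track::getState())
        std::cout << scope << '\n';  // prints: outer, then inner
}

int main() {
    Track t("outer");
    inner();
}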
The title may not really explain what I'm trying to get at; I couldn't think of a better way to describe what I mean.
I was wondering if it is good practice to check the arguments that a function accepts for nulls or empty values before using them. I have this function, which just wraps some hash creation, like so.
Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
    If FilePath Is Nothing Then
        Throw New ArgumentNullException("FilePath")
    End If

    ' Dispose of the hasher and the file stream deterministically
    Using _sha As New Security.Cryptography.MD5CryptoServiceProvider
        Using _stream As New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)
            Return Convert.ToBase64String(_sha.ComputeHash(_stream))
        End Using
    End Using
End Function
As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing.
I'm wondering: is this good practice, or should I just let the argument get to the actual hasher and have that throw the exception because it is null?
Thanks.
In general, I'd suggest it's good practice to validate all of the arguments to public functions/methods before using them, and fail early rather than after executing half of the function. In this case, you're right to throw the exception.
Depending on what your method is doing, failing early could be important. If your method was altering instance data on your class, you don't want it to alter half of the data, then encounter the null and throw an exception, as your object's data might then be in an intermediate and possibly invalid state.
If you're using an OO language, then I'd suggest it's essential to validate the arguments to public methods, but less important with private and protected methods. My rationale here is that you don't know what the inputs to a public method will be - any other code could create an instance of your class, call its public methods, and pass in unexpected/invalid data. Private methods, however, are called from inside the class, and the class should already have validated any data being passed around internally. A sketch of that split follows.
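A minimal sketch of the public-validates/private-assumes split, in C++ for brevity (the Account class and its members are hypothetical):

#include <cassert>
#include <stdexcept>

class Account {
public:
    // Public entry point: inputs can come from anywhere, so validate and throw.
    void deposit(double amount) {
        if (amount <= 0.0)
            throw std::invalid_argument("amount must be positive");
        applyDeposit(amount);
    }

private:
    // Private helper: callers are inside the class, which has already
    // validated, so a debug-only assert documents the assumption cheaply.
    void applyDeposit(double amount) {
        assert(amount > 0.0);
        balance_ += amount;
    }

    double balance_ = 0.0;
};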
One of my favourite techniques in C++ was to DEBUG_ASSERT on NULL pointers. This was drilled into me by senior programmers (along with const correctness) and is one of the things I was most strict on during code reviews. We never dereferenced a pointer without first asserting it wasn't null.
A debug assert is only active for debug targets (it gets stripped in release) so you don't have the extra overhead in production to test for thousands of if's. Generally it would either throw an exception or trigger a hardware breakpoint. We even had systems that would throw up a debug console with the file/line info and an option to ignore the assert (once or indefinitely for the session). That was such a great debug and QA tool (we'd get screenshots with the assert on the testers screen and information on whether the program continued if ignored).
I suggest asserting all invariants in your code, including unexpected nulls. If the performance of the ifs becomes a concern, find a way to conditionally compile them and keep them active in debug targets. Like source control, this is a technique that has saved my ass more often than it has caused me grief (the most important litmus test of any development technique).
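A minimal sketch of such a conditionally compiled assert in C++ (DEBUG_ASSERT is a hypothetical name; the real systems described above also offered a debug console and an ignore option):

#include <cstdio>
#include <cstdlib>

// Active in debug builds; compiled away entirely when NDEBUG is defined.
#ifdef NDEBUG
#define DEBUG_ASSERT(cond) ((void)0)
#else
#define DEBUG_ASSERT(cond) \
    do { \
        if (!(cond)) { \
            std::fprintf(stderr, "Assert failed: %s (%s:%d)\n", \
                         #cond, __FILE__, __LINE__); \
            std::abort(); /* or trigger a breakpoint for the debugger */ \
        } \
    } while (0)
#endif

int lookup(const int* table, int index) {
    DEBUG_ASSERT(table != nullptr);  // never dereference without asserting
    DEBUG_ASSERT(index >= 0);
    return table[index];
}

int main() {
    int table[] = {1, 2, 3};
    return lookup(table, 1) == 2 ? 0 : 1;
}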
Yes, it's good practice to validate all arguments at the beginning of a method and throw appropriate exceptions like ArgumentException, ArgumentNullException, or ArgumentOutOfRangeException.
If the method is private such that only you the programmer could pass invalid arguments, then you may choose to assert each argument is valid (Debug.Assert) instead of throw.
If NULL is an unacceptable input, throw an exception yourself, as you did in your sample, so that the message is helpful.
Another method of handling NULL inputs is just to respond with a NULL in turn. It depends on the type of function -- in the example above I would keep the exception.
If it's for an externally facing API, then I would say you want to check every parameter, as the input cannot be trusted.
However, if it is only going to be used internally, then the input should be trustworthy, and you can save yourself a bunch of code that's not adding value to the software.
You should check all arguments against the set of assumptions that you make in that function about their values.
As in your example: if a null argument to your function doesn't make any sense, and you're assuming that anyone using your function will know this, then being passed a null argument indicates some sort of error, and some sort of action should be taken (e.g. throwing an exception). And if you use asserts (as James Fassett got in and said before me ;-) ), they cost you nothing in a release version. (They cost you almost nothing in a debug version either.)
The same thing applies to any other assumption.
And it's going to be easier to trace the error if you generate it than if you leave it to some standard library routine to throw the exception. You will be able to provide much more useful contextual information.
It's outside the bounds of this question, but you do need to expose the assumptions that your function makes - for example, through the comment header to your function.
According to The Pragmatic Programmer by Andrew Hunt and David Thomas, it is the responsibility of the caller to make sure it gives valid input. So, you must now choose whether you consider a null input to be valid. Unless it makes specific sense to consider null to be a valid input (e.g. it is probably a good idea to consider null to be a legal input if you're testing for equality), I would consider it invalid. That way your program, when it hits incorrect input, will fail sooner. If your program is going to encounter an error condition, you want it to happen as soon as possible. In the event your function does inadvertently get passed a null, you should consider it to be a bug, and react accordingly (i.e. instead of throwing an exception, you should consider making use of an assertion that kills the program, until you are releasing the program).
Classic design by contract: If input is right, output will be right. If input is wrong, there is a bug. (if input is right but output is wrong, there is a bug. That's a gimme.)
I'll add a couple of elaborations (in bold) to the excellent design-by-contract advice offered by Brian earlier...
The principles of "design by contract" require that you define what is acceptable for the caller to pass in (the valid domain of input values) and then, for any valid input, what the method/provider will do.
For an internal method, you can define NULLs as outside the domain of valid input parameters. In this case, you would immediately assert that the input parameter value is NOT NULL. The key insight in this contract specification is that any call passing in a NULL value IS A CALLER'S BUG and the error thrown by the assert statement is the proper behavior.
Now, while that contract is very well defined and parsimonious, if you're exposing the method to external/public callers, you should ask yourself: is that the contract I/we really want?
Probably not. In a public interface, you'd probably accept the NULL (as technically within the domain of inputs the method accepts), but then gracefully decline to process it, with a return message. (More work, to meet the naturally more complex customer-facing requirement.)
In either case, what you're after is a protocol that handles all of the cases from both the perspective of the caller and the provider, not lots of scattershot tests that can make it difficult to assess the completeness or lack of completeness of the contractual condition coverage.
Most of the time, letting it just throw the exception is pretty reasonable as long as you are sure the exception won't be ignored.
If you can add something to it, however, it doesn't hurt to wrap the exception with one that is more accurate and rethrow it. Decoding "NullPointerException" is going to take a bit longer than "IllegalArgumentException("FilePath MUST be supplied")" (or whatever), as in the sketch below.
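A minimal sketch of catching a vague low-level exception and rethrowing a more descriptive one, in C++ for brevity (parseRatio is a hypothetical function; the answer above is Java-flavoured, but the idea is the same):

#include <stdexcept>
#include <string>

double parseRatio(const std::string& text) {
    try {
        std::size_t pos = 0;
        double value = std::stod(text, &pos);  // throws std::invalid_argument on bad input
        if (pos != text.size())
            throw std::invalid_argument("trailing characters");
        return value;
    } catch (const std::exception& e) {
        // Rethrow with context the caller can actually act on,
        // instead of the bare library message.
        throw std::invalid_argument("parseRatio: bad input \"" + text + "\": " + e.what());
    }
}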
Lately I've been working on a platform where you have to run an obfuscator before you test. Every stack trace looks like monkeys typing random crap, so I got in the habit of checking my arguments all the time.
I'd love to see a "nullable" or "nonull" modifier on variables and arguments so the compiler can check for you.
If you're writing a public API, do your caller the favor of helping them find their bugs quickly, and check for valid inputs.
If you're writing an API where the caller might be untrusted (or the caller of the caller), check for valid inputs, because it's good security.
If your APIs are only reachable by trusted callers, like "internal" in C#, then don't feel like you have to write all that extra code. It won't be useful to anyone.