Win32 Unicode normalization pre-Vista - winapi

I'm looking into the various normalization methods with regard to case-insensitive comparison of strings.
The most historically compatible method I've come up with is to call FoldString() with MAP_PRECOMPOSED to precompose each string, and then call CharUpper() followed by CharLower() on it to complete the normalization. This seems to be how Java does it (minus the precomposition).
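Roughly, the comparison I have in mind looks like this (a minimal sketch only; error handling is omitted, and the fixed-size buffers and helper names are purely illustrative):

#include <windows.h>
#include <wchar.h>

/* Sketch only: precompose with FoldString, then run the result through
   CharUpper and CharLower before comparing. */
static void NormalizeForCompare(const wchar_t *src, wchar_t *dst, int dstLen)
{
    /* MAP_PRECOMPOSED folds base character + combining mark pairs into
       their precomposed forms where one exists. */
    FoldStringW(MAP_PRECOMPOSED, src, -1, dst, dstLen);

    /* Upper-case then lower-case, mirroring the double fold described above. */
    CharUpperW(dst);
    CharLowerW(dst);
}

int CompareIgnoreCase(const wchar_t *a, const wchar_t *b)
{
    wchar_t na[256], nb[256];   /* fixed sizes just for the sketch */
    NormalizeForCompare(a, na, 256);
    NormalizeForCompare(b, nb, 256);
    return wcscmp(na, nb);      /* 0 means the strings compare equal */
}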
There are libraries such as ICU but I would like to keep my Unicode handling as lightweight as possible by relying on the APIs provided by the operating system instead of adding a large dependency.
I am aware that this uses the current system locale by default. I believe I read somewhere that Java uses en-US by default when performing String.equalsIgnoreCase().
Thoughts?

Related

Why does the Tool Help Library offer 2 versions of same functions/structures?

I've noticed that the Tool Help Library offers some functions and structures in two versions: normal and ending with W, for example Process32First and Process32FirstW. Since their documentation is identical, I wonder what the differences between the two are.
The W and A versions stand for "wide" and "ANSI". In the past, Windows provided separate functions, structures and types for ANSI and Unicode strings. For the purpose of this answer, Unicode means wide char, which is 2 bytes per character, and ANSI is 1 byte per character (though it's actually more complicated than that). By supplying both types, developers can use whichever they want, but the standard today is to use Unicode.
If you look at the ToolHelp32 header file, it does include both A and W versions of the structures and functions. If you're not finding them, you're not looking hard enough: do an explicit search for the identifiers and you will find them. If you're just doing "view definition", you will only see the #ifdef macros. If you still can't find them, change the character set in your Visual Studio project and check again.
Because wide-char arrays are twice the size, the structure layout will be incorrect if you do not use the correct types. Let the macros resolve them for you by setting the correct character set and using PROCESSENTRY32 without the A or W suffix; this is the preferred method. For some APIs you are honestly better off using the ANSI version, but that is something you will learn with experience and have to decide for yourself.
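For example, a minimal sketch that enumerates processes using the suffix-less names and lets the character-set macros pick the A or W variants (error handling kept to a minimum):

#include <windows.h>
#include <tlhelp32.h>
#include <tchar.h>

int main(void)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    /* PROCESSENTRY32 resolves to PROCESSENTRY32W or PROCESSENTRY32A depending
       on the project's character set, so its size and layout match the
       function the macro picks. */
    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);

    if (Process32First(snap, &pe)) {
        do {
            /* szExeFile is a TCHAR array, so _tprintf keeps the formatting
               consistent with whatever the macros resolved to. */
            _tprintf(TEXT("%s (PID %lu)\n"), pe.szExeFile, pe.th32ProcessID);
        } while (Process32Next(snap, &pe));
    }

    CloseHandle(snap);
    return 0;
}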
Here is an excellent article on the topic of character sets / encoding

How to customize a serialization

I'm a newbie with GraphQL and SPQR. I would like to serialize my dates with a custom format. How can I do it?
The best answer I'd offer is: don't! SPQR serializes all temporal scalars as ISO 8601 strings in the UTC zone for a reason. It is the most portable format, one that any client can easily parse and understand, and any conversion and display logic is better left to the client itself.
If this is for some reason impossible (e.g. backwards compatibility with a legacy client), your best bet is providing your own scalar implementations. In the future there might be a feature to avoid this, but currently you have to implement your own scalars and a TypeMapper that will map the desired Java types to those scalars. See the existing ScalarMapper for inspiration. Once you have the mapper, register it via generator.withTypeMappers.

Decoding Protobuf encoded data using non-supported platform

I am new to Protobufs and haven't had much exposure to them. One of the API endpoints we require data from uses Protobuf-encoded data. This generally wouldn't be an issue if I were using a 'supported' language such as JavaScript, Java, Python or even R to decode the data...
Unfortunately, I am trying to automate the process using Alteryx. Rather than this being an Alteryx specific question, I have a few questions about Protobufs themselves so I understand this situation better. I've read through the implementation of Protobufs in Java and Python, and have a basic understanding of how to use them.
To summarize (please correct me if I am wrong), a Protobuf is a method of serializing structured data, where a .proto schema is used to encode / decode data to and from raw binary. My confusion lies with the compiler. Google's documentation and examples for Python / Java show how a Protobuf compiler (library) is required in order to run the encoding and decoding process. Reading the Google website, it advises that Protobufs are 'language neutral and platform neutral', but I can't see how that is possible if you need the compiler (and the .proto file!) to do the decoding. For example, how would anyone using a language for which Google has not created a compiler possibly decode Protobuf-encoded data? Am I missing something?
I figure I'm missing something, since it seems weird that a public API would force this constraint.
"language/platform neutral" here simply means that you can reliably get the same data back from any language/framework/platform. The serialization format is defined independently and does not rely on the nuances of any particular framework.
This might seem a low bar, but you'd be surprised how many serialization formats fail to clear it.
Because the format is specified, anyone can create a tool for some other platform. It is a little fiddly if you're not used to dealing in bits, but: totally doable. The protobuf landscape is not dependent on Google - here's a list of some of the known non-Google tools: https://github.com/protocolbuffers/protobuf/blob/master/docs/third_party.md
Also, note that technically you don't even need a .proto; you just need some mechanism for specifying which fields map to which field numbers (since protobuf doesn't include the names). Quite a few in that list can work either from a .proto, or from the field/number map being specified in some other way. The advantage of .proto is simply that it is easy to convey as the schema - and again: isn't tied to any particular language. You can write plugins for "protoc" to add your own tooling, so you don't need to write your own parser from scratch. Or you can write your own parser from scratch if you prefer.
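To give a feel for how small the wire format is, here is a hand-rolled C sketch (not tied to any particular library) that decodes the classic three-byte message 08 96 01: a key varint encoding (field_number << 3) | wire_type, followed by a varint payload of 150:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Decode a base-128 varint; returns the number of bytes consumed, or 0 on error. */
static size_t read_varint(const uint8_t *buf, size_t len, uint64_t *out)
{
    uint64_t value = 0;
    for (size_t i = 0; i < len && i < 10; i++) {
        value |= (uint64_t)(buf[i] & 0x7F) << (7 * i);
        if ((buf[i] & 0x80) == 0) {   /* high bit clear: last byte of the varint */
            *out = value;
            return i + 1;
        }
    }
    return 0;
}

int main(void)
{
    const uint8_t msg[] = { 0x08, 0x96, 0x01 };   /* field 1, wire type 0, value 150 */

    uint64_t key = 0, value = 0;
    size_t n = read_varint(msg, sizeof msg, &key);
    read_varint(msg + n, sizeof msg - n, &value);

    /* Note there is no field name anywhere in the bytes, only the field number. */
    printf("field %u, wire type %u, value %u\n",
           (unsigned)(key >> 3), (unsigned)(key & 0x7), (unsigned)value);
    return 0;
}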
You can't really speak of an unsupported platform in this case: it is more about languages for which you can't find a protobuf implementation.
My 2 cents: if you can't find a protobuf implementation for your language, find another language you're familiar with (and that is popular in the protobuf community) and handle the protobuf serialization/deserialization with it. Then call it via a REST API, an executable ... whatever.

What are the WinAPI function declaration conventions?

What is the difference between these six functions?
LoadLibrary
LoadLibraryA
LoadLibraryEx
LoadLibraryExA
LoadLibraryExW
LoadLibraryW
What is the meaning of each suffix in the WinAPI, and what is the difference between all of these functions?
LoadLibrary and LoadLibraryEx are macros which are defined depending on whether your project is compiled with Unicode support. If so, they point to LoadLibraryW and LoadLibraryExW; otherwise they point to LoadLibraryA and LoadLibraryExA.
Typically, you are expected to write code using the versions without A or W at the end and let the compiler definitions do the magic for you.
The Ex suffix is a standard way of denoting an "EXtended" function: one that is similar to the regular version, but provides additional functionality. Generally, they were added in a newer version of Windows and may not always be available (although most of them are so old now that they were added back in Windows 3.1 or 95).
The exact difference between functions, as mentioned before, should always be checked on MSDN.
A means ANSI; W means Wide (Unicode).
The A versions do not support Unicode strings; they're relics from Win9X.
The suffix-less version will expand to the A or W versions at compile-time, depending on whether the symbol UNICODE is defined.
The Ex versions are newer versions of the API method with additional functionality; consult the documentation for more details.
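A minimal sketch of what this looks like in practice (the module names and the data-file flag are chosen only for illustration):

#include <windows.h>

int main(void)
{
    /* The suffix-less name is a macro: it expands to LoadLibraryW when the
       project defines UNICODE, and to LoadLibraryA otherwise. */
    HMODULE hUser = LoadLibrary(TEXT("user32.dll"));

    /* Calling a specific version directly fixes the string type. */
    HMODULE hWide = LoadLibraryW(L"kernel32.dll");
    HMODULE hAnsi = LoadLibraryA("kernel32.dll");

    /* The Ex variants take extra parameters: a reserved file handle and flags. */
    HMODULE hData = LoadLibraryExW(L"shell32.dll", NULL, LOAD_LIBRARY_AS_DATAFILE);

    if (hUser) FreeLibrary(hUser);
    if (hWide) FreeLibrary(hWide);
    if (hAnsi) FreeLibrary(hAnsi);
    if (hData) FreeLibrary(hData);
    return 0;
}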
A - ANSI
W - Wide (Unicode)
Ex - extended version of the same function, for example with some additional parameters

Why do people say that Java can't have an expression evaluator?

I am aware that by default Java does not have the so-called eval (which I pronounce "evil") method. This sounds like a bad thing: knowing you do not have something that so many others do. But even worse is being told that you can't have it.
My question is: what is the solid reasoning behind it? I mean, Googling this just returns a massive amount of old data and bogus reasons; even if the answer I'm looking for is out there, I can't filter it out from people who are just throwing generic tag-words around.
I'm not interested in answers that are telling me how to get around that; I can do that myself:
Using Bean Scripting Framework (BSF)
Contents of file sample.py (in the py folder):
def factorial(n):
    return reduce(lambda x, y: x * y, range(1, n + 1))
And Java code:
ScriptEngine engine = new ScriptEngineManager().getEngineByName("jython");
engine.eval(new FileReader("py" + java.io.File.separator + "sample.py"));
System.out.println(engine.eval("factorial(932)"));
Using designed bridges like JLink
This is equivalent to:
String expr = "N[Integrate[E^(2 y^5)/(2 x^3), {x, 4, 7}, {y, 2, 3}]]";
System.out.println(MM.Eval(expr));
//Output: 1.5187560850359461*^206 + 4.2210685420287355*^190*I
Other methods
Using Dijkstra's shunting-yard algorithm or the like and writing an expression evaluator from scratch.
Using complex regex and string manipulations with delegates and HashMultimaps.
Using Java Expressions Library
Using Java Expression Language
Using JRE compliant scripting language like BeanShell.
Using the Java Assembler and the approach below, or direct bytecode manipulation with something like Javassist.
Using the Java Compiler API and reflections.
Using Runtime.getRuntime().exec as root
"eval" is only available in scripting languages, because it uses the same interpreter that runs the rest of the code; in such languages the feature is free and well integrated, as in scripting environment it makes little difference if you run a string or a "real" function.
In compiled languages, adding "eval" would mean bundling the whole compiler, which would defeat the purpose of compiling. No compiled language I know of (even dynamic ones, like ActionScript 3) has eval.
Incidentally, the easiest way to eval in Java is the one you forgot to mention: JRE 1.6 comes with a JavaScript engine, so you can eval any JavaScript in two lines of code. You could even argue that the presupposition of your question is false: Java 1.6 bundles a very advanced expression evaluator.
As Daniel points out, there is at least one limitation that eval solutions face in Java. PHP's eval, for example, executes the code as if it were part of the surrounding method, with complete access to local variables; this is not possible in standard Java. Without this feature, eval alternatives require a lot more work and verbosity, which makes them a lot less attractive for "quick" and "easy" solutions.
eval() is mostly part of interpreted languages, where the names of local variables and the code structure (scopes) are available at runtime, making it possible to "insert" new code. Java bytecode no longer contains this information, leaving eval() alternatives unable to map access to local variables. (Note: I ignore debug information, as no program should rely on it and it may not be present.)
An example
int i = 0;
eval("i = 1");
System.out.println(i);
The required pseudocode in Java:
context.put("i",new Integer(0));
eval(context,"i = 1");
System.out.println(context.get("i"));
This looks fine for one variable used in the eval; try it with 10 in a longer method and you get 20 additional lines for variable access, plus the odd runtime error if you forget one.
Because evaluation of arbitrary Java expressions depends on their context: variable scopes and so on.
If you need some kind of variable expression, just use the scripting framework, and badamm! you have lots of different kinds of expression evaluation. Just take one kind, like JavaScript, as the default, and there is your eval()!
Enterprisey as Java is, you are not constrained to one choice.
But even worse is being told that you can't have it.
I think you are misunderstanding what (most of) those articles are saying. Clearly, there are many ways to do expression evaluation in a Java application. They haven't always been available, but at least some of them have been around for a long time.
I think what people are trying to say is that expression evaluation is not available as native (i.e. as an intrinsic part of Java or the standard libraries) and is unlikely to be added for a number of good reasons. For example:
Native eval would have significant security issues if used in the wrong place. (And it does for other languages; e.g. you shouldn't use eval in Javascript to read JSON because it can be a route for injecting bad stuff into the user's browser.)
Native eval would have significant performance issues, compared with compiled Java code. We are talking of 100 to 10,000 times slower, depending on the implementation techniques and the amount of caching of "compiled" eval expressions.
Native eval would introduce a whole stack of reliability issues ... much as overuse / misuse of type casting and reflection do.
Native eval is "not Java". Java is designed to be a primarily static programming language.
and of course ...
There are other ways to do this, including all of the implementation approaches that you listed. The Java SE platform is not in the business of providing every possible library that anyone could possibly want. (JRE downloads are big enough already.)
For these reasons, and probably others as well, the Java language designers have decided not to support expression evaluation natively in Java SE. (Even so, some expression support has officially made it into Java EE; e.g. in the form of JSP Expression Language. The classes are in the javax.el package ... or javax.servlet.jsp.el for an older / deprecated version.)
I think you already put the solution to your answer - bundle the BeanShell jar with your application (or lobby for it to be included in the JRE sometime), and you have your Java expression evaluator. It will still need a Binding of the input variables, though.
(What I'm more curious about: How does sandboxing of such a script/expression work? I don't want my web users to execute dangerous code in my server.)
