The docs for MAKELANGID specify that MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL) means 'Language neutral'.
This seems to be English on my machine (tried it with FormatMessage), but what does it mean in general? Is it guaranteed to be English?
Thanks!
I would expect that this means that the strings associated with the lang id are not specific to any language - which could be useful to know for a localisation team. "%1 + %2 = %3" would be an example of one such string.
With sublanguage = SUBLANG_DEFAULT, this would be the user's default language.
https://web.archive.org/web/20100704043524/http://msdn.microsoft.com/en-us/library/ms534732(VS.85).aspx
Here's a note on the sublanguage identifier - https://web.archive.org/web/20100728153356/http://wiki.winehq.org/SublangNeutral.
Note that MAKELANGID creates a language identifier for you from the primary language and sublanguage identifier - it does "not" get the default language, or anything like that.
No, it is not guaranteed to be English. It is whatever you place into it at that point (English, in your case). But it means that it should not serve as a (language) satellite assembly (except maybe as a fallback).
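For completeness, a small sketch of the scenario the question describes: asking FormatMessage for a system error string with the 'language neutral' identifier. Which language actually comes back depends on the message resources installed on the machine, as noted above.
#include <windows.h>
#include <iostream>

int main()
{
    wchar_t* buffer = nullptr;
    DWORD len = FormatMessageW(
        FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
        nullptr,
        ERROR_FILE_NOT_FOUND,                        // any system error code will do
        MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL),   // "language neutral"
        reinterpret_cast<wchar_t*>(&buffer),         // ALLOCATE_BUFFER writes a pointer here
        0,
        nullptr);

    if (len != 0)
    {
        std::wcout << buffer << std::endl;           // on an English machine: "The system cannot find the file specified."
        LocalFree(buffer);
    }
    return 0;
}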
I've recently encountered all sorts of wrappers in Google's protobuf package. I'm struggling to imagine the use case. Can anyone shed some light on what problem these were intended to solve?
Here's one of the documentation links: https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/well-known-types/string-value (it says nothing about what this can be used for).
One behavioral difference between this and the simple string type is that such a field will be written less efficiently (a couple of extra bytes, plus a redundant memory allocation). For other wrappers, the story is even worse, since the repeated variants of those fields will be written inefficiently (Google's official Protobuf serializer doesn't support packed encoding for non-numeric types).
Neither seems to be desirable. So, what's this all about?
There are a few reasons, mostly to do with where these are used - see struct.proto.
StringValue can be null, while a plain string often can't be in a language interfacing with protobufs. For example, in Go strings are always set; the "zero value" for a string is "", the empty string, so it's impossible to distinguish between "this value is intentionally set to the empty string" and "there was no value present". StringValue can be null and so solves this problem. This is especially important when it's used in a StructValue, which may represent arbitrary JSON: to do so it needs to distinguish between a JSON key that was set to the empty string (a StringValue containing an empty string) and a JSON key that wasn't set at all (a null StringValue).
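To make the unset-versus-empty distinction concrete, here is a minimal C++ sketch. The Profile message and its profile.pb.h header are hypothetical, invented only for illustration; the has_/mutable_ accessors are what protoc generates for a message-typed field.
#include <iostream>
#include "profile.pb.h"   // hypothetical generated header for:
                          //   message Profile { google.protobuf.StringValue nickname = 1; }

int main()
{
    Profile p;

    // No value present at all: the wrapper message is simply absent.
    std::cout << std::boolalpha << p.has_nickname() << "\n";                 // false

    // Intentionally set to the empty string: the wrapper exists, its value is "".
    p.mutable_nickname()->set_value("");
    std::cout << p.has_nickname() << " '" << p.nickname().value() << "'\n";  // true ''

    return 0;
}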
Also if you look at struct.proto, you'll see that these aren't fully fledged message types in the proto - they're all generated from message Value, which has a oneof kind { number_value, string_value, bool_value, ... }. By using a oneof, struct.proto can represent a variety of different values in one field. Again this makes sense considering what struct.proto is designed to handle - arbitrary JSON - you don't know what type of value a given JSON key has ahead of time.
In addition to George's answer, you can't use a Protobuf primitive as the parameter or return value of a gRPC procedure.
I have to create a list with all languages which should look like this:
Col 1 | Col 2
-----------------
English | English
German | Deutsch
French | Français
Spanish | Español
...
Col 1: Language in English
Col 2: Original Country Language
The list should cover all main languages (or in other words: all languages you can translate in Google Translate).
Of course, doing this by hand takes quite a while.
Is it possible to generate this list with a script by using the Google API?
Yes, Google has a Translation API which is very similar to Google Translate. There are a couple of endpoints which are of interest to you in this case.
There is a way to list all the available languages, which would populate your Col 1. By default, this returns all the language (and sometimes language-country) codes that are supported, but you can provide a target query parameter to also include the name of each language in a "target language". In your case, you would pass "en-US".
In theory, you could repeat this for every language code and then just use the result for the language's own language code to populate Col 2. (This may be the most accurate way, but you'll get back a lot of extra data you don't want.)
Of course, you can also just translate the text to get your Col 2 results.
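For reference, a rough sketch (not a definitive implementation) of how that first call might look from C++ using libcurl, assuming the v2 REST endpoint and the target query parameter described above; the API key is, of course, your own.
#include <curl/curl.h>
#include <iostream>
#include <string>

// Appends each chunk of the HTTP response body to a std::string.
static size_t Collect(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    const std::string key = "YOUR_API_KEY";   // assumption: your own API key
    const std::string target = "en";          // language names come back in English (Col 1)

    const std::string url =
        "https://translation.googleapis.com/language/translate/v2/languages"
        "?key=" + key + "&target=" + target;

    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, Collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    // body now holds JSON like:
    //   {"data":{"languages":[{"language":"de","name":"German"}, ...]}}
    // Repeating the request with target=<each code> gives the endonyms for Col 2.
    std::cout << body << std::endl;
    return 0;
}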
I have some existing Visual C++ code where I need to add the conversion of wide character strings to upper or lower case.
I know there are pitfalls to this (such as the Turkish "I"), but most of these can be ironed out if you know the language. Fortunately in this area of code I know the LCID value (locale ID), which I guess is the same as knowing the language.
As LCID is a Windows type, is there a Windows function that will convert wide strings to upper or lower case?
The C runtime function _towupper_l() sounds like it would be ideal but it takes a _locale_t parameter instead of LCID, so I guess it's unsuitable unless there is a completely reliable way of converting an LCID to a _locale_t.
The function you're searching for is called LCMapString and it is part of the Windows NLS APIs. The LCMAP_UPPERCASE flag maps characters to uppercase, while LCMAP_LOWERCASE maps characters to lowercase.
For applications targeting Windows Vista and later, there is an Ex variant (LCMapStringEx) that works on locale names instead of locale identifiers, which is what Microsoft now recommends you use.
In fact, in the CRT implementation provided with VS 2010 (and presumably other versions as well), functions such as _towupper_l ultimately end up calling LCMapString after they extract the locale ID (LCID) from the specified _locale_t.
If you're like me, and less familiar with the i18n APIs than you should be, you probably already know about the CharUpper, CharLower, CharUpperBuff, and CharLowerBuff family of functions. These have been the old standbys from the early days of Windows for altering the case of chars/strings, but as their documentation warns:
Note that CharXxx always maps uppercase I to lowercase i ("i"), even when the current language is Turkish or Azeri. If you need a function that is linguistically sensitive in this respect, call LCMapString.
What it neglects to mention is filled in by a couple of posts on Michael Kaplan's wonderful blog on internationalization issues: What does "linguistic casing" mean?, How best to alter case. The executive summary is that you achieve the same results as the CharXxx family of functions by calling LCMapString and not specifying the LCMAP_LINGUISTIC_CASING flag, whereas you can be linguistically sensitive by ensuring that you do specify the LCMAP_LINGUISTIC_CASING flag.
Sample code:
#include <windows.h>   // LCMapStringW, GetLastError
#include <string>

std::wstring test(L"Does my code pass the Turkey test?");
if (!LCMapStringW(lcid,                               /* your LCID, defined elsewhere */
                  LCMAP_UPPERCASE | LCMAP_LINGUISTIC_CASING,
                  test.c_str(),                       /* input string */
                  static_cast<int>(test.length()),    /* length of input string */
                  &test[0],                           /* output buffer (can reuse the input) */
                  static_cast<int>(test.length())))   /* length of output buffer (same as input) */
{
    // Uh-oh! Something went wrong in the call to LCMapString, so you need to
    // handle the error somehow here.
    // A good start is calling GetLastError to determine the error code.
}
I'm trying to convert numbers into localized strings.
For integers and money values it's pretty simple, since the string is just a series of digits and digit grouping separators. E.g.:
12 345 678 901 (Bulgarian)
12.345.678.901 (Catalan)
12,345,678,901 (English)
12,34,56,78,901 (Hindi)
12.345.678.901 (Frisian)
12٬345٬678٬901 (Pashto)
12'345'678'901 (German)
I use the Windows GetNumberFormat function to format integers (and GetCurrencyFormat to format money values).
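For example, a minimal sketch of that call (the locale here is just the user default; with a null NUMBERFMT, Windows applies the locale's default format, which includes its standard number of fractional digits):
#include <windows.h>
#include <iostream>

int main()
{
    // GetNumberFormatW takes the number as a string; passing nullptr for the
    // NUMBERFMT uses the locale's defaults, e.g. "12,345,678,901.00" for English.
    wchar_t formatted[64];
    if (GetNumberFormatW(LOCALE_USER_DEFAULT, 0, L"12345678901", nullptr,
                         formatted, 64))
    {
        std::wcout << formatted << std::endl;
    }
    return 0;
}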
But some numbers cannot be reasonably represented in fixed notation, and require scientific notation:
6.0221417930×10²³
or more specifically E notation:
6.0221417930E23
How can I get the localized version of scientific notation?
I suppose I could construct it using localized numbers:
6.0221417930E23
6,0221417930E23
6.0221417930e23
6·0221417930E23
6·0221417930e23
6,0221417930e23
6,,0221417930e23
6.0221417930E+23
6,0221417930E+23
6.0221417930e+23
6,0221417930e+23
6·0221417930E+23
6·0221417930e+23
6,,0221417930e+23
6.0221417930E23
6,0221417930E23
6.0221417930e23
6,0221417930e23
6·0221417930E23
6·0221417930e23
6,,0221417930e23
6.0221417930X10^23
6,0221417930X10^23
6.0221417930x10^23
6,0221417930x10^23
6·0221417930X10^23
6·0221417930x10^23
6,,0221417930x10^23
6.0221417930·10^23
6,0221417930·10^23
6.0221417930.10^23
6,0221417930.10^23
6·0221417930·10^23
6·0221417930.10^23
6,,0221417930.10^23
But I don't know if other cultures (cultures besides mine) use an E for exponentiation.
To the best of my knowledge, exponentiation notation is not part of Windows or .NET locale data. However, the Unicode CLDR can help once again: its <numbers> section contains what you are looking for:
/numbers/symbols/exponential says E or its equivalent in the given culture.
/numbers/scientificFormats/ shows the exponentiation pattern.
You'll need to download the zipped core CLDR data and extract the file for each culture you're interested in from the common/main directory.
If you want to be able to support all cultures, you'll have to gather the relevant info from all culture files and pack it into your own specific DB. Not quite a trivial work but it's possible.
I took a quick look at the data for a few very different cultures such as en, fr, zh, ru, vi, ar: they all contain the same pattern, #E0. It looks like either the data is not accurate (which I seriously doubt) or you really don't have to care: everybody does it the same way.
For Polish it should be 6,0221417930·10²³.
I don't think the CLDR data mentioned by Serge (great answer BTW) is valid here. However, it is still the best source of information. Otherwise you would need to ask your translators to translate the pattern for you (which would require a comment with a good explanation of what you are up to).
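If you do go the route the question suggests - assembling the notation yourself from localized pieces - here is a minimal sketch. It assumes the culture's exponent symbol has already been looked up (from CLDR's /numbers/symbols/exponential, or from a translator) and is passed in as expSymbol; FormatScientific is just an illustrative name, and only the decimal separator is localized, via GetLocaleInfo.
#include <windows.h>
#include <cmath>
#include <string>

// Formats value as <mantissa><expSymbol><exponent>, e.g. 6,0221417930E23 for a
// locale whose decimal separator is ",". Rounding edge cases near powers of ten
// and exponent-sign formatting are ignored in this sketch.
std::wstring FormatScientific(double value, LCID lcid, const std::wstring& expSymbol)
{
    // Split value into mantissa * 10^exponent.
    int exponent = (value == 0.0) ? 0
                 : static_cast<int>(std::floor(std::log10(std::fabs(value))));
    double mantissa = value / std::pow(10.0, exponent);

    // Render the mantissa with an invariant '.' ...
    wchar_t raw[64];
    swprintf(raw, 64, L"%.10f", mantissa);

    // ... then swap in the locale's decimal separator.
    wchar_t decimalSep[8];
    if (!GetLocaleInfoW(lcid, LOCALE_SDECIMAL, decimalSep, 8))
        return L"";  // handle the error properly in real code

    std::wstring result(raw);
    size_t dot = result.find(L'.');
    if (dot != std::wstring::npos)
        result.replace(dot, 1, decimalSep);

    return result + expSymbol + std::to_wstring(exponent);
}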
How can I set the default language of a WebOS project?
The standard way of adding internationalization in WebOS is to use the $L() function, where I can map a key to the translated string. But if the current language is not specified in the project, WebOS displays the key to the user. How can I stop this behaviour and set a default language that will be used instead of the key?
PS: I think the Palm way of using real-world sentences as keys is not a good way of programming.
Bad example: $L("This should be not a real world sentence!!")
Better example: $L("key.subKey")
You can use a key-value pair to solve this problem (from the Palm documentation):
If the original string is not appropriate as a key, the $L() function can be called with an explicit key:
$L("value":"Done", "key": "done_key");
At run-time, the result of the call to $L() is the translation of the string passed as value. The translations "live" in the resources/<locale>/strings.json file.
Example:
content of file app_name/resources/es_us/strings.json:
{
"My text here": "Mi texto aquí",
"done_key": "Listo",
"Some other string": "Some other string's translation"
}