VB6.0 Automation error

I'm currently working on a VB6.0 application which is giving an automation error that isn't very consistent (sometimes the code works, then crashes after several successful iterations).
Dim example As String
...
On Error GoTo ErrHandler
example = UCase$(Replace(form.UniTextBox(1).Text, " ", ""))
Exit Sub
ErrHandler:
    Debug.Print "ERROR: " & Err.Description
This is the section of code which I've identified as causing the automation error. The problem seems to occur when the computer is set up with a Polish locale running Windows 7; when the English locale is set there are no issues.
What is causing this issue?
Any advice or tips would be appreciated.
Thanks

Controls are ANSI, not Unicode; COM is Unicode, not ANSI. That string is being converted back and forth by Windows and VB.
Windows (which is what controls are) can be created as either ANSI or Unicode. VB6 was written when most computers only had ANSI windows, so all of its API calls (which creating a window requires) are ANSI calls. Send a Unicode string to an ANSI window and Windows will convert it to ANSI first; ask VB to make an API call or use forms and it will convert the Unicode string to ANSI.
See StrConv (byte arrays can act as Unicode strings), and also see the setting for non-Unicode programs under Regional Options in system settings.
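The conversion loss can be illustrated outside VB6. A minimal sketch in Python (an illustration only; cp1250 and cp1252 stand in for the Polish and Western European ANSI code pages that Windows uses):

```python
# Polish text survives a round trip through the Polish ANSI code page
# (cp1250), but not through the Western European one (cp1252).
text = "Zażółć gęślą jaźń"  # Polish pangram

ansi_pl = text.encode("cp1250")                    # lossless under a Polish locale
assert ansi_pl.decode("cp1250") == text

ansi_en = text.encode("cp1252", errors="replace")  # lossy: ż, ł, ś ... become "?"
assert ansi_en.decode("cp1252") != text
```

This is why the same code can run fine under one locale and fail under another: whether the implicit Unicode-to-ANSI conversion is lossless depends entirely on the active code page.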

Related

Viewing Japanese MBCS text while remote debugging from English Windows machine?

I'm trying to debug an MBCS application that has had the strings, dialogs, etc. localized for Japanese. There seems to be a bug somewhere with a string getting truncated.
I am debugging from English Windows 7 using Visual Studio 2013. Of course, since it is MBCS and not Unicode, when I view the strings it is just gibberish. If it were Unicode, the strings would probably display in Japanese while remote debugging, but it isn't, and converting is not really an option.
So, is there any way to use some special encoding trick to view the strings as Japanese on my English system? I'm not going to set my local system to Japanese for remote debugging either.
So... basically I'm looking for some kind of option to view the Japanese strings from the remote system as Japanese strings on my English system. Anybody else been down this road?
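For reference, the gibberish comes from interpreting code page 932 (Shift-JIS) bytes under the debugging machine's Western code page. A small Python sketch (the byte values are the real Shift-JIS encoding of 日本語; the rest is illustrative):

```python
raw = b"\x93\xfa\x96\x7b\x8c\xea"      # Shift-JIS (cp932) bytes for 日本語

print(raw.decode("cp932"))             # the intended Japanese text
print(raw.decode("cp1252", "replace")) # the mojibake a Western system shows
```

Any tool that lets you re-decode the raw bytes as cp932 will recover the Japanese text, which is essentially what a debugger visualizer would need to do here.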

How to return System Locale rather than User Locale?

I'm running a VB6.0 application and testing whether changing locale settings causes errors. The application works fine when both User Locale and system locale are set to the same country. However if the User Locale is different from the system locale then I have problems.
Why is GetThreadLocale not returning the system locale?
It appears to return the User Locale.
E.g. the system locale is set to Polish and the User Locale to English (UK); GetThreadLocale returns 2057. Why is 1045 not being returned?
Any tips or advice would be appreciated.
VB6 is Unicode internally and with COM. API calls, which include any windows created by forms, are ANSI.
VB6's help has a big chapter on this topic.
For ANSI applications you set the non-Unicode language setting in Regional Options.
Windows created with CreateWindowExA have all Unicode strings sent to them converted to ANSI, and vice versa for CreateWindowExW (W means Wide, i.e. two bytes per character).
This is because Windows 95 didn't support Unicode.
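As to the question itself: GetThreadLocale follows the thread's (user) locale by default, while the system locale is exposed by a separate kernel32 API, GetSystemDefaultLCID. The two values in the question are ordinary packed LCIDs; a small sketch of how they decode (the bit layout is from the Win32 LCID definition):

```python
def split_lcid(lcid):
    """Split an LCID into (primary language ID, sublanguage ID)."""
    primary = lcid & 0x3FF      # low 10 bits: primary language
    sub = (lcid >> 10) & 0x3F   # next 6 bits: sublanguage/region
    return primary, sub

assert split_lcid(2057) == (0x09, 0x02)  # LANG_ENGLISH, SUBLANG_ENGLISH_UK
assert split_lcid(1045) == (0x15, 0x01)  # LANG_POLISH, default sublanguage
```

So 2057 is simply the packed form of English (UK); to see 1045 you would have to ask for the system default LCID rather than the thread locale.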

Prevent Windows changing ANSI characters

I'm having this problem with (I assume) Windows.
I need to convert an existing application written in Delphi 7 into a multilingual application.
I'm using the Delphi ITE and everything works well, except that, when the file is saved and recompiled, special characters like ë are converted to e.
I thought that this is a Delphi issue but then realized that, even if I create text document in notepad, insert the character ë, save the document in ANSI format, and then open it again, the character is converted to e.
Is there any workaround to this, except for upgrading to Delphi 2009 and using unicode components?
How can I make Windows keep the original character?
Obviously this is not a coding issue, so no relevant code could be posted.
Thanks
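What the question describes looks like Windows' "best-fit" mapping in WideCharToMultiByte: when the target ANSI code page has no slot for a character, Windows substitutes the closest base letter instead of failing. A hedged illustration in Python (cp1251 is just an example of a code page without ë; the NFKD line only approximates Windows' best-fit table):

```python
import unicodedata

text = "ë"
try:
    text.encode("cp1251")       # a code page with no slot for ë
except UnicodeEncodeError:
    print("not representable")  # Python refuses the lossy conversion

# Windows' ANSI save instead best-fits the character, roughly:
stripped = unicodedata.normalize("NFKD", text).encode("ascii", "ignore")
assert stripped.decode("ascii") == "e"
```

If that is what is happening, the workaround is to save the source files in a code page that actually contains ë (or as UTF-8 with a BOM, if the tooling accepts it), rather than whatever ANSI code page the system defaults to.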

TextPad and Unicode: full support?

I've got some UTF-8 files created in Mac, and when trying to open them using TextPad in Windows, I get the following warning:
WARNING: (file name) contains characters that do not exist in code
page 1252 (ANSI Latin 1). They will be converted to the system default
character, if you click OK.
Linux (GNOME gEdit) can open the same file without complaints. What does the above mean? I thought that TextPad had full UTF-8 support. Can I safely open and edit UTF-8 files using it without corrupting the file?
It seems that TextPad cannot handle characters outside windows-1252 (CP1252, here carrying the misnomer “ANSI Latin 1”). I tested it on Windows, opening a plain text file created on the same system, as UTF-8 encoded, both with and without BOM, with the same result. The program’s help does not seem to contain anything related to character encodings, and its tools for writing “international characters” are for Latin-1 characters only.
There are several text editors for Windows that can deal with UTF-8 (even Notepad can open a UTF-8 file, but it can hardly be recommended for serious editing). See Alan Wood’s collection of information on Unicode editors and word processors for Windows. (Personally, I like Notepad++ and BabelPad, which are both free.)
TextPad 8, the newest as of 2016-01-28, does finally properly support BMP Unicode. It's a paid upgrade, but so far has been working flawlessly for me.
TextPad ‘supports’ UTF-8 and UTF-16 documents only in as much as it will import and export them. But it still edits files as simple bytes, and not Unicode characters (using the ANSI code page, which is code page 1252 for Western European).
So unless the file happened to contain only characters that also exist in that code page, you will lose content. This rather defeats the point of Unicode.
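The warning amounts to a replacement-character conversion. A minimal Python sketch of what "converted to the system default character" means (the sample string is illustrative):

```python
text = "日本語 café"               # created on the Mac, saved as UTF-8
utf8_bytes = text.encode("utf-8")

# TextPad's conversion is effectively an encode into cp1252 with replacement:
lossy = utf8_bytes.decode("utf-8").encode("cp1252", errors="replace")
assert lossy.decode("cp1252") == "??? café"  # the CJK text is gone for good
```

The accented Latin characters survive because they exist in cp1252; everything outside that code page is irreversibly replaced, which is exactly the data loss the warning describes.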
Indeed, this was the issue that made me flee to EmEditor at the time, though now I would agree with the previous comments and recommend Notepad++. The era of paying for text editors is long gone.
Actually, TextPad does support displaying Unicode code points, though they went about it the wrong way. To display Unicode characters you have to choose Configure->Preferences and expand Document Classes->Text->Font.
You need to choose a Unicode font AND set the Script to match, e.g. Arial Unicode MS with script CHINESE_BIG5.
However, this is a backward approach, since the application should handle this when the user tells TextPad to open the file as Unicode or UTF-8. The built-in Notepad application in MS Windows detects the encoding automatically and displays the glyphs correctly based on the encoding.
I found a discussion on this in the Textpad forums:
http://forums.textpad.com/viewtopic.php?t=11019
While I have Notepad++, Textpad handles large files with ease while other editors I've tried, including Notepad++, either slow to a crawl or die. I'm currently trying to edit a 475MB file and Notepad++ is not up to the task.
Textpad Configure Menu --> Preferences --> Document Classes --> Default --> Default encoding --> UTF-8
Try the ANSI code set with File/Open; that should solve the problem in TextPad.

Cygwin displays error messages in Hebrew and garbled

I have been using Cygwin to build my Android library using the NDK's ndk-build script and Cygwin's make tool. It started giving me errors containing a bunch of garbled Latin non-English characters. When I copied the text into Google, it was pasted as Hebrew (which I can read). Is there any way to force it to output errors in English? Any idea why this happens?
Check your environment variables for the correct locale. LANG or LC_MESSAGES are probably responsible. Set those to an English locale (in your profile to have that in future sessions as well) to get English error messages. Sorry, I'm a Windows person and know nearly nothing of Unix so you'd have to look up the specifics elsewhere, but this should be the general direction to go.
Some programs and libraries try to be overly smart by guessing the locale from the keyboard layout or the user's locale, oftentimes ignoring the fact that on Windows the locale and the UI language are two different concepts (and that different languages on the console are even harder to get right).
As for why the messages appear garbled that's likely because the console window uses the wrong code page. The easiest fix is usually to use a TrueType font for the console window, but in this case neither Consolas nor Lucida Console include glyphs for Hebrew, so you'd only see boxes anyway.
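A minimal sketch of the fix for Cygwin's bash, following the answer above (the plain "C" locale is assumed because it is always available):

```shell
# Put these lines in ~/.bashrc so future Cygwin sessions inherit them too.
# LC_MESSAGES controls only translated program messages; setting LC_ALL
# instead would also override number and date formatting.
export LC_MESSAGES=C
```

After re-sourcing the profile, gettext-based tools such as make and gcc fall back to their untranslated (English) messages.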
