What is "ANSI encoding" in NSIS? - windows

I have an application that is installed using NSIS 3.x. The installer stores a path in a file that will be later checked by the application. The path is written using the FileWrite function; the documentation for FileWrite states the following:
Writes an ANSI string to a file opened with FileOpen. If an error occurs writing, the error flag will be set.
(If you are building a Unicode installer, the function makes the adequate conversion and writes an ANSI string)
I know that ANSI is a loosely defined term (see this question and its many answers). Is there a way to find out which encoding this NSIS function actually uses, so that I can later read the generated file correctly from the application?

It is the Windows meaning of ANSI; it is the currently active/default Windows codepage.
Before NSIS v3, NSIS was always "ANSI". It called the "A" versions of all API functions. Internally every string was just treated as bytes and no conversions were done. This is why FileWrite effectively wrote ANSI strings.
Unicode NSIS installers call WideCharToMultiByte(CP_ACP, ...) to convert the Unicode string before calling FileWrite.
It should be considered safe and portable only for ASCII characters. Another machine might be configured with a different default codepage, and not all Unicode characters can be converted correctly even on the same machine. The only exceptions are Windows 10/11 configured to use UTF-8 as the default codepage, or an application that opts in to UTF-8 via its manifest.
If the application you are installing is also Unicode enabled then you should be using FileWriteUTF16LE.
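For example, a minimal sketch of writing the path as UTF-16LE from a Unicode (NSIS 3) installer; the file name path.txt is purely illustrative:
FileOpen $0 "$INSTDIR\path.txt" w
FileWriteUTF16LE /BOM $0 "$INSTDIR" ; the BOM makes the encoding easy to detect when reading
FileClose $0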
You can also write Unicode .ini files:
FileOpen $0 $INSTDIR\cfg.ini a
FileWriteUTF16LE /BOM $0 "" ; Make sure it has a BOM
FileClose $0
WriteIniStr $INSTDIR\cfg.ini Paths App $InstDir

Related

Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10)

I've been forcing the usage of chcp 65001 in Command Prompt and Windows Powershell for some time now, but judging by Q&A posts on SO and several other communities it seems like a dangerous and inefficient solution. Does Microsoft provide an improved / complete alternative to chcp 65001 that can be saved permanently without manual alteration of the Registry? And if there isn't, is there a publicly announced timeline or agenda to support UTF-8 in the Windows CLI in the future?
Personally, I've been using chcp 949 for Korean character support, but the odd rendering of the backslash (\), the incorrect or garbled output in several applications (such as Neovim), and the fact that non-Korean characters aren't supported by code page 949 have become more of a problem lately.
Note:
This answer shows how to switch the character encoding in the Windows console to
(BOM-less) UTF-8 (code page 65001), so that shells such as cmd.exe and PowerShell properly encode and decode characters (text) when communicating with external (console) programs with full Unicode support, and in cmd.exe also for file I/O.[1]
If, by contrast, your concern is about the separate aspect of the limitations of Unicode character rendering in console windows, see the middle and bottom sections of this answer, where alternative console (terminal) applications are discussed too.
Does Microsoft provide an improved / complete alternative to chcp 65001 that can be saved permanently without manual alteration of the Registry?
As of (at least) Windows 10, version 1903, you have the option to set the system locale (language for non-Unicode programs) to UTF-8, but the feature is still in beta as of this writing.
To activate it:
Run intl.cpl (which opens the regional settings in Control Panel)
On the Administrative tab, click Change system locale... and check Beta: Use Unicode UTF-8 for worldwide language support, then confirm and reboot.
This sets both the system's active OEM and the ANSI code page to 65001, the UTF-8 code page, which therefore (a) makes all future console windows, which use the OEM code page, default to UTF-8 (as if chcp 65001 had been executed in a cmd.exe window) and (b) also makes legacy, non-Unicode GUI-subsystem applications, which (among others) use the ANSI code page, use UTF-8.
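To verify the change afterwards, open a new console window (after the reboot the setting requires) and check the active code page; a minimal sketch in PowerShell:
chcp                        # should report: Active code page: 65001
[Console]::OutputEncoding   # the CodePage property should likewise be 65001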
Caveats:
If you're using Windows PowerShell, this will also make Get-Content and Set-Content and other contexts where Windows PowerShell defaults to the system's active ANSI code page (notably, reading source code from BOM-less files) default to UTF-8 (which PowerShell Core (v6+) always does). This means that, in the absence of an -Encoding argument, BOM-less files that are ANSI-encoded (which is historically common) will then be misread - see the sketch after these caveats for reading such a file explicitly - and files created with Set-Content will be UTF-8-encoded rather than ANSI-encoded.
[Fixed in PowerShell 7.1] Up to at least PowerShell 7.0, a bug in the underlying .NET version (.NET Core 3.1) causes follow-on bugs in PowerShell: a UTF-8 BOM is unexpectedly prepended to data sent to external processes via stdin (irrespective of what you set $OutputEncoding to), which notably breaks Start-Job - see this GitHub issue.
Not all fonts speak Unicode, so pick a TT (TrueType) font, but even they usually support only a subset of all characters, so you may have to experiment with specific fonts to see if all characters you care about are represented - see this answer for details, which also discusses alternative console (terminal) applications that have better Unicode rendering support.
As eryksun points out, legacy console applications that do not "speak" UTF-8 will be limited to ASCII-only input and will produce incorrect output when trying to output characters outside the (7-bit) ASCII range. (In the obsolescent Windows 7 and below, programs may even crash).
If running legacy console applications is important to you, see eryksun's recommendations in the comments.
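As for the first caveat: to read a BOM-less file that is still encoded in the old ANSI code page (e.g., Windows-1252) after the switch, specify that encoding explicitly. A minimal sketch that works in both PowerShell editions; the file name legacy.txt is illustrative:
[System.IO.File]::ReadAllText("$PWD\legacy.txt", [System.Text.Encoding]::GetEncoding(1252))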
However, for Windows PowerShell, setting the system locale to UTF-8 alone is not enough:
You must additionally set the $OutputEncoding preference variable to UTF-8: $OutputEncoding = [System.Text.UTF8Encoding]::new()[2]; it's simplest to add that command to your $PROFILE (current user only) or $PROFILE.AllUsersCurrentHost (all users) file.
Fortunately, this is no longer necessary in PowerShell Core, which internally consistently defaults to BOM-less UTF-8.
If setting the system locale to UTF-8 is not an option in your environment, use startup commands instead:
Note: The caveat re legacy console applications mentioned above equally applies here. If running legacy console applications is important to you, see eryksun's recommendations in the comments.
For PowerShell (both editions), add the following line to your $PROFILE (current user only) or $PROFILE.AllUsersCurrentHost (all users) file, which is the equivalent of chcp 65001, supplemented with setting preference variable $OutputEncoding to instruct PowerShell to send data to external programs via the pipeline in UTF-8:
Note that running chcp 65001 from inside a PowerShell session is not effective, because .NET caches the console's output encoding on startup and is unaware of later changes made with chcp; additionally, as stated, Windows PowerShell requires $OutputEncoding to be set - see this answer for details.
$OutputEncoding = [console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding
For example, here's a quick-and-dirty approach to add this line to $PROFILE programmatically:
'$OutputEncoding = [console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding' + [Environment]::Newline + (Get-Content -Raw $PROFILE -ErrorAction SilentlyContinue) | Set-Content -Encoding utf8 $PROFILE
For cmd.exe, define an auto-run command via the registry, in value AutoRun of key HKEY_CURRENT_USER\Software\Microsoft\Command Processor (current user only) or HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor (all users):
For instance, you can use PowerShell to create this value for you:
# Auto-execute `chcp 65001` whenever the current user opens a `cmd.exe` console
# window (including when running a batch file):
Set-ItemProperty 'HKCU:\Software\Microsoft\Command Processor' AutoRun 'chcp 65001 >NUL'
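To inspect that auto-run command later, or to remove it and undo the change, a minimal sketch:
Get-ItemProperty 'HKCU:\Software\Microsoft\Command Processor' AutoRun      # show the current value
Remove-ItemProperty 'HKCU:\Software\Microsoft\Command Processor' AutoRun   # undo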
Optional reading: Why the Windows PowerShell ISE is a poor choice:
While the ISE does have better Unicode rendering support than the console, it is generally a poor choice:
First and foremost, the ISE is obsolescent: it doesn't support PowerShell (Core) 7+, where all future development will go, and it isn't cross-platform, unlike the new premier IDE for both PowerShell editions, Visual Studio Code, which already speaks UTF-8 by default for PowerShell Core and can be configured to do so for Windows PowerShell.
The ISE is generally an environment for developing scripts, not for running them in production (if you're writing scripts (also) for others, you should assume that they'll be run in the console); notably, with respect to running code, the ISE's behavior is not the same as that of a regular console:
Poor support for running external programs, not only due to lack of supporting interactive ones (see next point), but also with respect to:
character encoding: the ISE mistakenly assumes that external programs use the ANSI code page by default, when in reality it is the OEM code page. E.g., by default this simple command, which tries to simply pass a string echoed from cmd.exe through, malfunctions (see below for a fix):
cmd /c echo hü | Write-Output
Inappropriate rendering of stderr output as PowerShell errors: see this answer.
The ISE dot-sources script-file invocations instead of running them in a child scope (the latter is what happens in a regular console window); that is, repeated invocations run in the very same scope. This can lead to subtle bugs, where definitions left behind by a previous run can affect subsequent ones.
As eryksun points out, the ISE doesn't support running interactive external console programs, namely those that require user input:
The problem is that it hides the console and redirects the process output (but not input) to a pipe. Most console applications switch to full buffering when a file is a pipe. Also, interactive applications require reading from stdin, which isn't possible from a hidden console window. (It can be unhidden via ShowWindow, but a separate window for input is clunky.)
If you're willing to live with that limitation, switching the active code page to 65001 (UTF-8) for proper communication with external programs requires an awkward workaround:
You must first force creation of the hidden console window by running any external program from the built-in console, e.g., chcp - you'll see a console window flash briefly.
Only then can you set [console]::OutputEncoding (and $OutputEncoding) to UTF-8, as shown above (if the hidden console hasn't been created yet, you'll get a handle is invalid error).
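Putting that workaround together, a sketch of the steps described above, to be run inside the ISE:
chcp | Out-Null   # step 1: run any external program to force creation of the hidden console window
$OutputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding   # step 2: no longer fails
cmd /c echo hü | Write-Output   # the sample command from above should now round-trip correctly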
[1] In PowerShell, if you never call external programs, you needn't worry about the system locale (active code pages): PowerShell-native commands and .NET calls always communicate via UTF-16 strings (native .NET strings) and on file I/O apply default encodings that are independent of the system locale. Similarly, because the Unicode versions of the Windows API functions are used to print to and read from the console, non-ASCII characters always print correctly (within the rendering limitations of the console).
In cmd.exe, by contrast, the system locale matters for file I/O (with < and > redirections, but notably including what encoding to assume for batch-file source code), not just for communicating with external programs in-memory (such as when reading program output in a for /f loop).
[2] In PowerShell v4-, where the static ::new() method isn't available, use $OutputEncoding = (New-Object System.Text.UTF8Encoding).psobject.BaseObject. See GitHub issue #5763 for why the .psobject.BaseObject part is needed.
You can put the command chcp 65001 in your Powershell Profile, which will run it automatically when you open Powershell. However, this won't do anything for cmd.exe.
Microsoft is currently working on an improved terminal that will have full Unicode support. It is open source, and if you're using Windows 10 Version 1903 or later, you can already download a preview version.
Alternatively, you can use a third-party terminal emulator such as Terminus.
The Powershell ISE displays Korean perfectly fine. Here's a sample text file encoded in utf8 that would work:
PS C:\Users\js> cat .\korean.txt
The Korean language (South Korean: 한국어/韓國語 Hangugeo; North
Korean: 조선말/朝鮮말 Chosŏnmal) is an East Asian language
spoken by about 77 million people.[3]
Since the ISE comes with every version of Windows 10, I do not consider it obsolete. I disagree with whoever deleted my original answer.
The ISE has some limitations, but some scripting can be done with external commands:
echo 'list volume' | diskpart # as admin
cmd /c echo hi
EDIT:
If you have Windows 10 1903, you can download Windows Terminal from the Microsoft Store https://devblogs.microsoft.com/commandline/introducing-windows-terminal/, and Korean text works there. PowerShell 5 needs the text file to be UTF-8 with BOM or UTF-16.
EDIT2:
It seems the ideal setups are Windows Terminal + PowerShell 7 or VS Code + PowerShell 7, for both pasting characters and displaying output.
EDIT3:
Even in the EDIT2 setups, some Unicode characters cannot be pasted, such as ⇆ (U+21C6) or Unicode spaces. Only PS7 on macOS handles them.

Delphi 2007 compilation with codepage parameter and teamcity

I'm trying to compile a project written in Delphi 2007 with the parameter --codepage:1252. On one machine with Windows 10 everything is fine and Spanish strings are displayed correctly. When I do the same on a computer with Windows 8, the --codepage:1252 parameter changes nothing.
I need this because the computer where it's not working hosts the TeamCity agent.
Has anyone had a similar problem? Or is it possible to get Spanish characters displayed properly in a non-Unicode application without changing the Windows language-for-non-Unicode-programs setting, which requires a system restart and is problematic for the TeamCity server?
EDIT:
On both computers the language for non-Unicode programs is set to Polish, and:
Windows 10 - compiled with Spanish characters, copied the exe to the Spanish PC: Spanish characters are not shown.
Windows 10 - compiled with Spanish characters and --codepage:1252 set in the .dproj ("1252"), copied the exe to the Spanish PC: Spanish characters are shown correctly.
Windows 10 - compiled with Spanish characters and the --codepage:1252 parameter on the command line using dcc32.exe, copied the exe to the Spanish PC: Spanish characters are not shown.
Windows 8 - compiled with Spanish characters, copied the exe to the Spanish PC: Spanish characters are not shown.
Windows 8 - compiled with Spanish characters and the --codepage:1252 parameter, copied the exe to the Spanish PC: Spanish characters are not shown.
What is the difference between compiling from the Delphi IDE and using dcc32.exe? It should produce the same output, because Delphi invokes dcc32 in the same way - I can see that in the build output.
UPDATE:
After more tests my conclusion is:
Compiling from the Delphi IDE with "--codepage:1252" works when all files are in ANSI format; when I change them to UTF-8 it stops working.
From the command line it does not work in any case or combination.
Delphi 2007 is the last version of Delphi that still uses an Ansi-based RTL/VCL. The native GUI components use the OS's default Ansi codepage. So it does not matter if your source code is encoded in Latin1/Spanish (which is what the --codepage:1252 parameter is telling the compiler) if your GUI cannot display Spanish data correctly on a non-Spanish machine.
As dummzeuch mentioned, your source code should be saved as UTF-8 instead of Latin1/Spanish. But if you want to display Unicode data at runtime from an Ansi-based executable, you will have to convert your data to Unicode and use 3rd party Unicode GUI components, such as the TNT Unicode controls. Also see Handling a Unicode String in Delphi Versions <= 2007.
Otherwise, bite the bullet and upgrade to Delphi 2009 or later, which use a native Unicode-based RTL/VCL. Delphi has been a fully Unicode product since 2008, it is time to ditch ANSI.
You could, on the computer where it works correctly, change the file format to UTF-8 and save it (I'm not sure whether it is possible to do that automatically for all files in a project). Then it should work on any computer, regardless of its codepage.
To change the file format, use the context menu of the editor window.

newlines when displayed in Windows

open OUT, ">output.txt";
print OUT "Hello\nWorld";
When I run the above perl code in a Unix system and then transfer output.txt to Windows and open it in Notepad it shows as:
HelloWorld
What do I need to do to get the newlines displaying properly in Windows?
Text file line endings are platform-specific. If you're creating a file intended for the Windows platform then you should use
open OUT, '>:crlf', 'output.txt' or die $!;
Then you can just
print OUT "Hello\nworld!\n";
as normal
The :crlf PerlIO layer is the default for Perl executables built for Windows, so you don't need to add it to code that will create files for its intended platform. For portable software you can check the host system by examining the built-in variable $^O
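For instance, a minimal sketch of portable code that always produces a Windows-style (CRLF) file, using $^O to decide whether the layer has to be added explicitly (the file name is illustrative):
use strict;
use warnings;

# On Windows builds of perl the :crlf layer is already the default,
# so only add it explicitly when running elsewhere (e.g. on Unix).
my $mode = $^O eq 'MSWin32' ? '>' : '>:crlf';

open my $out, $mode, 'output.txt' or die "Cannot open output.txt: $!";
print {$out} "Hello\nWorld\n";
close $out;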
Windows uses carriage-return + linefeed:
print OUT "Hello\r\nWorld";
I wrote the File::Edit::Portable module that eliminates these problems. Although you can use it to write (along with many other things), you only need the read() functionality in this case.
Install the module, and at the top of your script, add:
use File::Edit::Portable;
When opening/reading the file, you can just do:
my $rw = File::Edit::Portable->new;
my $fh = $rw->read('file.txt');
No matter what the line endings are or what platform you're on, it does all of the cross-platform work in the background so you don't have to. That way, you can open the file on any system, regardless of what line endings you've decided to use, and it just works.
Newline handling is editor specific (there are a number of answers that claim it is OS specific, but that is, in real life, not true in general). However, it is true that on DOSish systems, longstanding convention is to use CRLF to indicate EOL (see also Why is fwrite writing more than I tell it to?)
If you try to open this file in any other editor than Notepad, you will notice that the text is properly displayed on two lines, with an indicator in a status bar or some other place that the file is opened in Unix mode or LF mode.
Unless you intend your file solely for viewing with Notepad, you don't have anything to worry about. Every other tool on Windows will deal fine with it.
However, Notepad does expect a CRLF sequence to mark the end of each line. If you do want to cater for it, then you can just output "\r\n" as #kizeloo suggests. I do prefer to use output layers when they are necessary.
Note that if you try to view such a file using an editor that requires a single LF to signify EOL, you may see ^Ms or other characters denoting the CR.

Unicode symbols in Git for Windows

Not entirely sure how to best explain, but would it be possible to use certain Unicode symbols, not supported by the Windows Command Prompt, in Git for Windows? Preferably without using Cygwin.
The git-bash.exe packaged with git-for-windows does support those Unicode characters (it uses mintty).
But the cmd.exe console does not (as pointed out by kostix in the comments), even with a Unicode (TrueType) font set: those particular characters ('✔', '✖', '★') are not included in TrueType console fonts such as Lucida Console.

Working with UTF-8 encoded TCL files on windows

We are trying to convert the source files of a Tcl/Tk application to UTF-8, because this is the default charset of the platforms we use for development (Linux and OS X).
Our problem is that Windows uses "cp1252" as the system encoding, and because of this it displays labels and buttons with (for example) German umlauts incorrectly.
The only solution we have found so far is to add "-encoding UTF-8" to all "wish" calls and "source" commands.
(There is also "encoding system UTF-8", but the documentation says that you shouldn't use it because of problems with system calls)
Is there a way to tell Tcl that it should use UTF-8 as the default encoding for all source files, or is there perhaps another solution to this problem?
The solution suggested in the Tcl chat:
Create and use your own versions of "open" and "source" (like "my_open" and "my_source") which then call the original commands with "-encoding utf-8".
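A minimal sketch of what those wrappers could look like (the my_* names follow the suggestion above; source -encoding requires Tcl 8.5 or later):
# "source" replacement that always reads scripts as UTF-8
proc my_source {fileName} {
    uplevel 1 [list ::source -encoding utf-8 $fileName]
}

# "open" replacement that configures the returned channel for UTF-8
proc my_open {fileName {access r}} {
    set ch [::open $fileName $access]
    fconfigure $ch -encoding utf-8
    return $ch
}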

Resources