Getting readable diff displays in Mercurial on Unicode files (MS Windows)

I'm trying to store some Windows PowerShell scripts in a Mercurial repository. It seems the PowerShell editor likes to save files as UTF-16 Unicode. This means that there are lots of \0 bytes, which is what Mercurial uses to distinguish between "text" and "binary" files. I understand that this makes no difference to how Mercurial stores the data, but it does mean that it displays binary diffs, which are kind of hard to read. Is there a way to tell Mercurial that these really are text files? Presumably I would need to convince Mercurial to use an external Unicode-aware diff program for particular file types.

This may not be relevant to you; read the last paragraph if it doesn't sound like it is.
I'm not sure whether this is what you need, but I've also wanted real diffs of UTF-16LE content rather than just "binary files are different". When I searched around some months ago I found a thread and a bug report discussing it; here's part of it. I can't find the original source of this mini-extension now (though it does just what that patch does), but what I ended up with is an extension, BOM.py:
#!/usr/bin/env python
from mercurial import util
import codecs

# Byte-order marks whose presence means the file is Unicode text, not binary.
boms = [
    codecs.BOM_UTF8,
    codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE,
    codecs.BOM_UTF32_BE, codecs.BOM_UTF32_LE,
]

def binary(s):
    # Files starting with a BOM are treated as text even though they contain NULs.
    if s:
        for bom in boms:
            if s.startswith(bom):
                return False
        return '\0' in s
    return False

def reposetup(ui, repo):
    # Replace Mercurial's binary-detection helper with the BOM-aware version.
    util.binary = binary
This gets loaded in the .hgrc (or your users\username\mercurial.ini) like this:
[extensions]
bom = ~/.hgexts/BOM.py
Note the path will vary between Windows and Linux; on my Windows copy I put the path as \...\whatever (it's on a USB disk where the drive letter can change). Unfortunately relative paths are taken relative to the current working directory rather than the repository root or any such thing, but if you are saving it on your C: drive, you can just put the full path.
In Linux (my main development environment), this works well; in Command Prompt (which I still use regularly), it generally works well. I've never tried it in PowerShell, but I would expect it to be better than Command Prompt in its support for arbitrary null bytes in the command line.
I'm not sure if this is what you want at all; from the way you've said "binary diffs" I suspect you may already have this, or be using hg diff -a, which achieves the same thing. In that case, all I can think of is writing another extension which takes the UTF-16LE content and attempts to decode it to UTF-8. I'm not sure of the syntax for such an extension, but I might try it out.
Edit: having now trawled the Mercurial source through commands.py, cmdutil.py, patch.py and mdiff.py, I see that binary diffs are done with a base85 encoding (patch.b85diff) rather than the normal diff. I wasn't aware of that; I thought it just forced a normal diff. In that case, perhaps this text is relevant after all. I await a response to see if it is!
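For the curious, that means the "binary diff" output is the changed data run through a base85 encoding, so there is nothing human-readable in it. A rough illustration using the standard library's base64.b85encode (for demonstration only; Mercurial uses its own base85 implementation, not this exact function):
import base64

data = 'Write-Host "hello"'.encode('utf-16le')  # hypothetical script content
print(base64.b85encode(data))                   # an opaque run of ASCII characters, useless for review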

I have worked around this by creating a new file with NotePad++ and saving it as a PowerShell file (.ps1 extension). NotePad++ will create the file as a plain text ANSI file. Once created I can open the file in the PowerShell editor and make any changes as necessary without the editor modifying the file encoding.
Disclaimer: I encountered this just moments ago and so I am not sure if there are any repercussions but so far my scripts appear to work as normal and my diffs are showing up nicely.
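If you already have scripts stored as UTF-16, a one-off re-encode keeps their content but removes the NUL bytes that make Mercurial treat them as binary. A minimal sketch (the file name is just an example, and whether your PowerShell editor keeps the new encoding on the next save is a separate question):
import codecs

def reencode_utf16_to_utf8(path):
    # Read the raw bytes and only convert files that actually carry a UTF-16 BOM.
    with open(path, 'rb') as f:
        raw = f.read()
    if raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE):
        text = raw.decode('utf-16')          # the BOM tells decode() the byte order
        with open(path, 'wb') as f:
            f.write(text.encode('utf-8'))    # written without a BOM, so no NUL bytes remain

reencode_utf16_to_utf8('myscript.ps1')       # hypothetical example file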

If my other answer does not do what you want, I think this one may; although I haven't tested it on Windows at all yet, it's working well in Linux. It does a potentially nasty thing: it wraps mercurial.mdiff.unidiff with a new function which converts UTF-16LE to UTF-8. This will not affect hg st, but will affect hg diff. One potential pitfall is that the BOM will also be changed from the UTF-16LE BOM to the UTF-8 BOM.
Anyway, I think it may be useful to you, so here it is.
Extension file utf16decodediff.py:
import codecs
from mercurial import mdiff

unidiff = mdiff.unidiff

def new_unidiff(a, ad, b, bd, fn1, fn2, r=None, opts=mdiff.defaultopts):
    """
    A simple wrapper around mercurial.mdiff.unidiff which first decodes
    UTF-16LE text.
    """
    if a.startswith(codecs.BOM_UTF16_LE):
        try:
            # Gets reencoded as utf-8 to be a str rather than a unicode; some
            # extensions may expect a str and may break if it's wrong.
            a = a.decode('utf-16le').encode('utf-8')
        except UnicodeDecodeError:
            pass
    if b.startswith(codecs.BOM_UTF16_LE):
        try:
            b = b.decode('utf-16le').encode('utf-8')
        except UnicodeDecodeError:
            pass
    return unidiff(a, ad, b, bd, fn1, fn2, r, opts)

mdiff.unidiff = new_unidiff
In .hgrc:
[extensions]
utf16decodediff = ~/.hgexts/utf16decodediff.py
(Or equivalent paths.)

Related

RStudio: keeping special characters in a script

I wrote a script with German special characters e.g. ü.
However, whenever I close R and reopen the script the characters are substituted:
Before "für"; "hinzufügen"; "Ø" - After "für"; "hinzufügen"; "Ã".
I tried to remedy it using save with encoding and choosing UTF-8 as it is stated here but it did not work.
What am I missing?
You don't say what OS you're using, but this kind of thing really only happens on Windows nowadays, so I'll assume that.
The problem is that Windows has a local encoding that is not UTF-8. It is commonly something like Latin1 in English-speaking countries. I'm not sure what encoding people use in German-speaking countries, if that's where you are. From the junk you saw, it looks as though you saved the file in UTF-8, then read it using your local encoding. The encodings for writing and reading have to match if you want things to work.
In RStudio you can try "Reopen with encoding..." and specify UTF-8, and you'll probably get your original back, as long as you haven't saved it after the bad read. If you did that, you've got a much harder cleanup to do.

Determining encoding for a file in Ruby

I have come up with a method to determine encoding (or at least a guess at it) for a file that I pass in:
def encoding_type(file_path)
  File.read(file_path).encoding.name
end
The problem with this is that I have a file that is 15GB, so that means the entire file is being read into memory.
Is there any way to accomplish what I am doing in this method without reading the entire file into memory?
The file --mime command will return the MIME type and encoding of the file:
file --mime myfile
myfile: text/plain; charset=iso-8859-1
def detect_charset(file_path)
  `file --mime #{file_path}`.strip.split('charset=').last
rescue => e
  Rails.logger.warn "Unable to determine charset of #{file_path}"
  Rails.logger.warn "Error: #{e.message}"
end
The method you suggest in your question will not do what you think. It will simply set the file's string to the Encoding.default_internal encoding, possibly after transcoding it from Encoding.default_external. These are both usually UTF-8. The encoding will always be Encoding.default_internal after you run that code; it is not guessing or determining the encoding from the actual file.
If you have a file and you really don't know what encoding it is, you indeed will have to guess. There's no way to be 100% sure you've gotten it right as the author intended (and some files are corrupt and mixed encoding or not legal in any encoding).
There are libraries with heuristics meant to try and guess (they won't be right all the time).
Here's one, which I've never actually used myself, but it was the likeliest prospect I found in 10 minutes of googling: https://github.com/oleander/rchardet. There might be other Ruby gems for this. You could also use Ruby's system() to call a command-line utility that tries to do the same thing; another answer here mentions the Linux file command.
If you don't want to load the entire file in to test it, you can certainly just load part of it. The chardet library will probably work more reliably the more data it has, but, sure, just read the first X bytes of the file and then ask chardet to guess their encoding.
require 'chardet19'
first1000bytes = File.read(file, 1000)
cd = CharDet.detect(first1000bytes)
cd.encoding
cd.confidence
You can also always check whether any string in Ruby is valid for the encoding it is tagged with:
str.valid_encoding?
So you could simply go through a variety of encodings and see if it's valid:
orig_encoding = str.encoding
str.force_encoding("ISO-8859-1").valid_encoding?
str.force_encoding("UTF-8").valid_encoding?
str.force_encoding(orig_encoding) # put it back to what it was
But it's certainly possible for a file to be valid in more than one encoding, or to be valid in a given encoding but read as nonsense by humans in that encoding.
If you have your best guess encoding, but it's still not valid_encoding? for that encoding, it may just have a few bad bytes in it. You can remove them with String.scrub in ruby 2.1, or with this pure-ruby backport of String.scrub in other ruby versions.
Hope this helps give you some idea of what you're dealing with and what your options are.

WinMerge: How to compare files with the same content but different encodings?

Motivation: I am rewriting a doc -- text files to be processed later. The new sources now use UTF-8. Large portions of the sources are the same. I need to find differences.
Details: The old doc sources use the cp1250 encoding, the new sources use the UTF-8. Both new and old sources use the same line endings (CR+LF). I am using the Unicode version of the WinMerge application (WinMergeU.exe), version 2.12.4.0.
It almost works, but... When the lines differ, they are initially marked as a block in dark yellow, and the differing portions are marked in a lighter colour. When I move the red block cursor there, the panes below show the differing part.
However, the block of text is also marked in dark yellow in cases where (the Unicode representation of) the text is the same. The red block also moves to those portions of the files. In that case, the two panes below (that show the differences) contain the same text and nothing is marked as different. See the picture below:
The very first line differs -- this is OK. But the second line has visually the same content. The only character outside the ASCII range there is Ú; it has a different byte representation in the two encoded sources. This causes the line to be marked as different, but the panes below do not mark anything on the line as different.
See also the following paragraphs, which are exactly the same (only the encoding of the sources differs; the same line endings are used).
It looks as if the initial comparison were based on binary representation of the lines. Is there any setting to tell WinMerge that the comparison (I mean the block marking) should be based on Unicode content?
I tried hard, but no luck, yet.
Update: The above question was for the latest stable 2.12.4. The beta version 2.13.22 works just perfectly for me. See my answer below.
This doesn't really answer your question about WinMerge, but have you considered using another diff program? One of my favorites is kdiff - http://kdiff3.sourceforge.net/
When I do a compare in KDiff3 using one UTF-8 file and another Unicode file, I get the following compare screen - note that the encodings of the files are different, but the files are considered to be equal from a text standpoint:
I think it really should not be the task of a merge tool to allow the merging of files stored in different encodings.
An encoding is a function that maps bytes (stored on the disk or in memory) to characters (displayed on screen). Unfortunately, by default the encoding of a file is not stored together with the file. Therefore, any program that wants to open the file and display its contents needs to guess the encoding. While this sometimes works, it is also an error prone procedure.
Now, the character sets of different encodings do not fully overlap in general. So what is the merge tool supposed to do if you merge a character C from file A in encoding X into a file B in encoding Y, and character C is not part of the character set of encoding Y?
Thus, I think the task of a merge tool should be to merge the binary content. Anything else is a dirty hack and doomed to fail at some level. (A merge tool maker may decide to provide character-level merging, which also might work most of the time. But there is some guesswork involved.)
Therefore, I'd also recommend you first translate the old files to UTF-8 and then merge those with the new versions.
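If you go that route, a short script can do the translation in bulk before you diff. A rough sketch, assuming the old sources are cp1250 text files under the current directory (the file pattern and encodings are just examples; adjust them to your layout):
import glob
import io

# Convert every old cp1250 source file to UTF-8 in place.
for path in glob.glob('**/*.txt', recursive=True):   # hypothetical file pattern; needs Python 3.5+
    with io.open(path, 'r', encoding='cp1250') as f:
        text = f.read()
    with io.open(path, 'w', encoding='utf-8') as f:
        f.write(text)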
Just for your information: the question was about the latest stable 2.12.4. I have tried the beta version 2.13.22, and it works just perfectly for me. See the difference for exactly the same files -- only the first lines in the files were removed. (My big thanks to the authors.)
Edit -> Options
Select 'Compare' from categories pane on left.
Check box 'Ignore carriage return differences' (UNIX, Windows, Mac)
I would recommend converting the files to the same encoding before diffing.
If you are working with a version control system I'd recommend the following:
Create a fresh checkout of the files
Convert all files to UTF-8
Commit the files
Copy your new files over
Use WinMerge
That way you end up with two commits in the history - one for the encoding change and another for the content changes and WinMerge will work as expected.
What about the File -> File Encoding... option in WinMerge? It lets you set the encoding for each file independently.

Executable Files - how to identify them in ASCII

It looks like all EXE files begin with MZ when they are opened in ASCII mode. Is there an ASCII identifier for VBS, COM and BAT files as well? I can't seem to find a pattern...
Or maybe there's another way to identify them, aside from just the extension...
No, not really (Windows executables can have PE or PK at the beginning instead of MZ - see this for other possible formats).
For other types of files, there are certain heuristics you can use (e.g. GIF files start with "GIF87a" or "GIF89a", Bash shell scripts usually start with #!/bin/bash, BAT files often start with @echo off, VBS scripts use an apostrophe at the start of a line as a comment marker), but they aren't always 100% reliable (a file can be both a BAT script and a Bash shell script; or a file can be both a valid ZIP archive and a valid GIF image (like that stegosaurus image), for example).
See e.g. this article for further reading.
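To give a feel for what such header sniffing looks like in code, here is a minimal sketch that checks a few well-known signatures (the table is illustrative, not exhaustive, and as noted above these heuristics are not 100% reliable):
# A few well-known "magic" prefixes; real tools such as file or TrID know thousands.
SIGNATURES = [
    (b'MZ', 'DOS/Windows executable'),
    (b'GIF87a', 'GIF image'),
    (b'GIF89a', 'GIF image'),
    (b'PK\x03\x04', 'ZIP archive (also JAR, DOCX, ...)'),
    (b'#!', 'script with a shebang line'),
]

def sniff(path):
    with open(path, 'rb') as f:
        head = f.read(8)                      # the prefixes above all fit in 8 bytes
    for magic, description in SIGNATURES:
        if head.startswith(magic):
            return description
    return 'unknown (fall back to the extension or a full magic database)'

print(sniff('example.exe'))                   # hypothetical file name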
TrID has a "standalone" application you could use: pass the file in and it tells you what type it is. It prides itself on being able to take a generic file (with or without an extension) and use the file's headers to discover what file type it actually is.
See if this tutorial is helpful (How to detect the types of executable files, a three-part series). It even presents a step-by-step algorithm for doing this.
Also see this post: How to determine if a file is executable?

Are there any invalid linux filenames?

If I wanted to create a string which is guaranteed not to represent a filename, I could put one of the following characters in it on Windows:
\ / : * ? | < >
e.g.
this-is-a-filename.png
?this-is-not.png
Is there any way to identify a string as 'not possibly a file' on Linux?
There are almost no restrictions - apart from '/' and '\0', you're allowed to use anything. However, some people think it's not a good idea to allow this much flexibility.
An empty string is the only truly invalid path name on Linux, which may work for you if you need only one invalid name. You could also use a string like "///foo", which would not be a canonical path name, although it could refer to a file ("/foo"). Another possibility would be something like "/dev/null/foo", since /dev/null has a POSIX-defined non-directory meaning. If you only need strings that could not refer to a regular file you could use "/" or ".", since those are always directories.
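If you need to test candidate names programmatically, the rules above boil down to very little; a minimal sketch (the helper name is my own) for a single path component:
def could_be_linux_filename(name):
    # A single path component: non-empty, no slash, no NUL byte.
    # '.' and '..' exist but always refer to directories, not regular files.
    return bool(name) and '/' not in name and '\0' not in name

print(could_be_linux_filename('this-is-a-filename.png'))  # True
print(could_be_linux_filename('bad/name.png'))            # False
print(could_be_linux_filename(''))                        # False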
Technically it's not invalid, but filenames with a dash (-) at the beginning will give you a lot of trouble, because they conflict with command-line arguments.
I personally find that a lot of the time the problem is not Linux but the applications one is using on Linux.
Take for example Amarok. Recently I noticed that certain artists I had copied from my Windows machine were not appearing in the library. I checked and confirmed that the files were there, and then I noticed that certain characters in the folder names (named for the artist) were represented with a weird-looking square rather than an actual character.
In a shell terminal the filenames look even stranger: /Music/Albums/Einst$'\374'rzende\ Neubauten is an example of how strange.
While these files were definitely there, Amarok could not see them for some reason. I was able to use some shell trickery to rename them to sane versions, which I could then rename with ASCII-only characters using MusicBrainz Picard. Unfortunately, Picard was also unable to open the files until I renamed them, hence the need for a shell script.
Overall this is a tricky area, and it seems to get very thorny if you are trying to synchronise a music collection between Windows and Linux where certain folder or file names contain funky characters.
The safest thing to do is stick to ASCII-only filenames.
