Mac text editor that supports line-break and encoding detection/change - macos

I'm used to Windows, and I'm trying to find replacement tools in the Mac world.
I use Notepad++ on Windows, and I frequently use its ability to detect and change the charset and the line-break mode of a file.
I found some editors that support changing line breaks, but I haven't found one that can display the charset of the current file and convert it to another.
Thank you for your help!

Yes: I recommend BBEdit or its free, smaller sibling TextWrangler. Both are very good, and both handle changes to encoding, line breaks, etc. well.

Related

Notepad++, Atom, encoding seems to be broken

I have to edit a PHP (.inc) file which was created long ago, and I don't know which editor was used to create it. The Cyrillic letters in Notepad++ are shown as if they were in the wrong encoding:
In GitHub's Atom editor, the Cyrillic letters are lost entirely and replaced with the � character:
But in a browser everything is displayed correctly! The same is true in Windows Notepad. Why is it displayed incorrectly in code editors, and is there a way to make it look normal?
P.S. It only now occurred to me that I could just copy the text from Windows Notepad and save it in Notepad++ :D But I'm still curious why this happened in the code editors.
P.S.2 Problem solved. The editors just didn't recognize the original encoding properly. When I changed it manually to Windows-1251, everything became OK.
Atom's support for encoding isn't as mature as that of some other editors out there. As you have already discovered, you can change the encoding in the bottom right-hand corner and Atom will remember it; however, there are some packages which help further:
Out of the box, as you have discovered, there is Encoding Selector, which allows you to choose how Atom interprets the contents of the text file.
There is a package named Auto Encoding that automatically selects the encoding for you; it does have some issues with certain types of file, though you might find this isn't a problem for you.
Finally, there is my personal favorite, editor-settings, which allows you to set the encoding for all files of a specific language, with a specific file extension, or in a specific directory.
As an example, if you wanted all .inc files in a directory to use windows-1251, create a .editor-settings file in that directory and paste in the following:
encoding: utf-8
extensionConfig:
  inc:
    encoding: windows-1251

Mac Terminal: How to view a Microsoft Word file? (It says it is binary)

So I was writing a paper in Microsoft Word, and the file is corrupt now. I'm trying to see if I can open the file using vim, but it says it is binary. Is there any command or any way to convert it to text so that I can just vim myfile.doc and copy the text contents? I tried doing cp myfile.doc myfile.txt to change the extension, but it still says it's binary.
A .doc file is a proprietary Microsoft format. The .docx format is XML-based, but neither can be read directly in a text editor. If your file is corrupt, you're probably going to have a lot more luck trying to find the autosave location, or trying to recover the document using the tool Office provides. In future, remember to back up your work ;)
/usr/bin/strings may be helpful -- it's built into OS X. Hope you can recover your paper.
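For example (a minimal sketch, with myfile.doc standing in for your actual file), this dumps whatever readable text survives into a plain text file you can open in vim:

strings myfile.doc > recovered.txt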
.doc or .docx is not a plain text file. It contains formatting and a bit of binary data, and .docx adds an XML layer on top.
You can go for OpenOffice, which is free.
You might try using Antiword to convert to .txt if it can still access the file properly.
http://en.wikipedia.org/wiki/Antiword
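As a sketch (assuming Antiword is installed, e.g. via Homebrew or MacPorts, and myfile.doc stands in for your file):

antiword myfile.doc > myfile.txt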
Word itself has an option to "Recover Text From Any File" which is worth trying if you haven't done so already. When you open the file in Word, if it doesn't recognise the format, you should see a conversion dialog, and the option is in there. You might have to check the "Confirm conversion at open" option (e.g. "Word Preferences->General->Confirm conversion at open" on Mac Word 2011, "File->Options->Advanced->General->Confirm file format conversion on open" in Windows Word 2010).

In Win7, Unicode/UTF-8 text file: gibberish on Windows console (trying to display Hebrew)

I have a wide-character file (with Hebrew text) that looks fine in Notepad (saved in "UTF-8 encoding"), reads fine in Notepad++, and when I copy and paste it into MS Word it looks fine too. But when I open a "DOS box" (Windows console) and run "type file.txt", it prints gibberish. And yes, I've followed all the recommendations for Unicode on the Windows console: I opened the console using "cmd /u", I changed the font to Lucida, and I entered "chcp 65001".
The problem is identical on a PC running Windows 7, and on another PC running Windows XP SP3.
The font Courier New supports Hebrew and can be added to the command prompt. The default fonts are Consolas, Lucida Console, and raster fonts; none of them supports Hebrew. So add Courier New to the command prompt.
Doing that requires a registry hack:
http://www.howtogeek.com/howto/windows-vista/stupid-geek-tricks-enable-more-fonts-for-the-windows-command-prompt/
http://www.techrepublic.com/blog/windows-and-office/quick-tip-add-fonts-to-the-command-prompt/
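In essence (a sketch of the change those articles describe; back up the registry first), you add a string value under the console's TrueType font key, whose name has one more zero than the existing entries:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" /v 000 /t REG_SZ /d "Courier New"

Log off and back on, and Courier New should appear in the command prompt's font list.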
That's a good way to install fonts, but I ended up removing a lot of the entries I added, because most of them never actually appeared in cmd - cmd doesn't support them.
Lucida Console and Consolas are defaults.
Raster Fonts is a default that isn't listed in that registry key, presumably because it isn't a TrueType font.
Of all the fonts I tried to add, only three took (i.e., are supported by cmd): Courier New, DejaVu Sans Mono, and Droid Sans Mono.
DejaVu Sans Mono and Droid Sans Mono are downloadable, are supported by cmd, and may have good Unicode coverage, but they don't include Hebrew.
I now have:
Consolas <-- default
Courier New <-- added
DejaVu Sans Mono <-- added
Droid Sans Mono <-- added
Lucida Console <-- default
Raster Fonts <-- default
Common Hebrew fonts are Miriam and David, but they can't be added to the command prompt.
For the record, BabelMap can list all fonts on your system that support Hebrew: in BabelMap, click Fonts -> Font Coverage, then enter 05D0 (that's aleph). I think all of these fonts exist on a default Windows 7 installation:
Aharoni, Arial, Courier New, David, FrankRuehl, Gisha, Levenim MT, Lucida Sans Unicode, Microsoft Sans Serif, Miriam, Miriam Fixed, Narkisim, Rod, Segoe WP, Tahoma, Times New Roman
But most or all of those Hebrew-capable fonts aren't supported in the command prompt, except Courier New. In fact, most fonts full stop aren't supported in the command prompt, not even Times New Roman (because it is not monospaced/fixed-width, and that's one of a number of criteria for a font to be supported; the other criteria seem to be more obscure).
So now you can have Courier New added and selected for use in the command prompt.
And then you can paste Unicode characters into cmd, provided the selected font supports them.
To copy/paste: click the Copy button in Character Map; the character is now in the clipboard.
To paste it into the command prompt (in Win7, paste isn't Ctrl-V there), right-click and choose Paste, or, if in QuickEdit mode, just right-click.
That's the main thing.
Additionally
Often on Windows one might use Notepad and Character Map, but one should be aware of some of their limitations.
Character Map shows the first 65536 Unicode characters, when the font you selected supports them, and it shows you the UTF-16 code. That's OK; you can still paste from Character Map into a cmd.exe window, but you should know that commands run in cmd.exe, and pipes, don't support UTF-16. So you can use Character Map to find a character, e.g. aleph 05D0, but it's worth looking the character up on http://www.fileformat.info/info/unicode/char/05d0/index.htm and seeing that while the UTF-16 code is 05d0, the UTF-8 encoding is d790. The xxd and file commands are useful for seeing the real contents of a file and determining its type.
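For example (a quick sketch; aleph.txt is a hypothetical file containing a single aleph, saved as UTF-8 without BOM):

xxd aleph.txt
file aleph.txt

xxd should show the bytes d790 (the UTF-8 encoding) rather than the UTF-16 code 05d0, and file should report something like "UTF-8 Unicode text".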
Notepad is a bit limited when it comes to Unicode, or to any character in the Unicode character set whose UTF-16 code is > FF. And cmd is a bit limited with regard to some commands like type, and with regard to pipes and redirection.
If you're using cmd.exe, you really need pipes to work, because pipes are important.
Pipes are limited to the encodings that can be specified by the chcp command.
(Note that if chcp tells you you are on a particular codepage, e.g. 850, it's telling you the input encoding. If you run the command chcp 850, it changes both the input and the output encoding. Usually they are the same; it's simpler when they are. But if you used some other program to change cmd's encoding, e.g. the C# compiler has a switch that changes it, then it's best to change it back with chcp so you know both encodings are set.)
There is a chcp 1200 (UTF-16LE) and a chcp 1201 (UTF-16BE), but neither is supported; if you try them, you get an "invalid code page" error (tested on Win7). So chcp doesn't support UTF-16 at all, neither LE nor BE. There is chcp 65001 (that's UTF-8 without BOM). And there is chcp 862 (the old-fashioned, MS-DOS-era way of encoding Hebrew that I mentioned).
The type command supports UTF-16LE, as does Notepad (what Notepad calls "Unicode" is UTF-16LE), but pipes and redirection don't support it. The type command also supports any codepage specified/supported by chcp, so type supports 862 and 65001.
So you could use Notepad and save as UTF-8 (which Notepad writes with a BOM), then fiddle around to remove the BOM; that's a bit of overkill. Or you could use Notepad and save as Unicode (UTF-16LE), but then you can't use pipes, and that's bad. The easiest thing to do is use a text editor like Notepad2 or Notepad++, which support UTF-8 without BOM.
Or, if doing everything from cmd, you could use 862 or 65001. Though many text editors may not support 862 well, so you might prefer 65001.
If you want to write any file in Notepad that has a character greater than what UTF-16 calls \uFF, and you want to run cmd.exe commands on that file, then some commands (e.g. type) will have problems unless you take into account what is supported by what.
Notepad only offers UTF-16BE, UTF-16LE, and UTF-8 with BOM; that's not good, and there's no need to fiddle around with xxd, sed, or other commands to remove the BOM. If you have any file with a so-called Unicode character, a character outside the regular ASCII range (shown by Character Map as being > \uFF), then use Notepad2 or Notepad++.
type supports UTF-16LE, plus any codepage set by chcp, e.g. 65001 or 862.
Pipes and redirection go by whatever is set by chcp.
Codepage 862 is old, so codepage 65001 is a good way to go.
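A minimal session to tie this together (a sketch; hebrew.txt is a hypothetical file saved as UTF-8 without BOM in Notepad2 or Notepad++):

chcp 65001
type hebrew.txt
rem redirection and pipes also follow the active codepage:
type hebrew.txt > copy.txt

With Courier New selected as described earlier, the Hebrew characters should display.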
xxd and file are useful for seeing how a file is encoded, which can be helpful if you run into issues, but they are not absolutely necessary.
So, if you want to write a file for use in cmd and it contains some Unicode characters: while there are commands like xxd and sed that could be used to remove a BOM, the easiest way is to make the file in a text editor like Notepad2 or Notepad++ that supports UTF-8 without BOM.
Getting Hebrew displaying is probably the most important thing to do first, as described above. The next thing is being able to save files, in a text editor, that you can then display with e.g. type.
And if you ever want to copy from the command prompt when not in QuickEdit mode: right-click, choose Mark, select the text, then hit Enter. To paste, right-click and choose Paste.
A further point:
Apparently there are bugs in chcp 65001 whereby some batch files won't run, and maybe some C programs won't work either (see "How to use unicode characters in Windows command line?"). I've even seen the C# compiler crash when cmd was in codepage 65001; one may blame the C# compiler, but one could also blame 65001 (see "Why is csc.exe crashing when I last left the output encoding as UTF8?").
Note: an earlier revision of this answer had some command-line examples, but they were unnecessarily complex. I might at some point add some commands that demonstrate what I have been describing, but it's fairly trivial.
/u is for UTF-16LE, not UTF-8. This is why saving the file as UTF-16LE (what Windows/Notepad misleadingly calls "Unicode") and running with /u works, in as much as it does.
UTF-8 should be achievable with chcp 65001, but there are some nasty low-level bugs in the Microsoft C Runtime for this code page, which makes some apps unreliable and some not run at all.
So yeah, I'm sorry, but UTF-8 is a second-class citizen under Windows. Anything that uses the 'ANSI' interfaces for IO, including anything that uses the C standard IO library, including the Command Prompt, won't be able to cope with it properly.
The only reliable way to get Unicode output in Command Prompt is to use the Windows-specific WriteConsoleW interface to push Unicode strings directly. Unfortunately as this is not available cross-platform, many tools won't use it.
In any case, even when you've got the encoding right, you still have to have a font in the Command Prompt that contains the characters you want. I believe this is why you still aren't getting Hebrew in the /u+UTF-16LE route.
Summary: Command Prompt + non-ASCII == almost certain fail. Give up and find some other interface you can use that supports Unicode better.
You should convert file.txt to UTF-16LE before running type file.txt.
Reference: What encoding/code page is cmd.exe using?
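One way to do the conversion (a sketch using PowerShell; note that "Unicode" in PowerShell's encoding names means UTF-16LE, and the file names here are hypothetical):

Get-Content file.txt -Encoding UTF8 | Set-Content file-utf16.txt -Encoding Unicode

Then, in a console started with cmd /u, type file-utf16.txt should work, subject to the font caveats discussed above.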
I presume you mean "Lucida Console" when you say "Lucida".
Using the Character Map application, I couldn't find any Hebrew characters in that font. I don't know whether the font was more capable in earlier versions of Windows, but in Windows 7 there appears to be nothing outside of the European characters.
My system also has Lucida Sans Typewriter, which does include the Hebrew characters. Unfortunately, the cmd window doesn't show it as a choice. You need to edit the registry to open up more choices, as shown in this question on Super User: https://superuser.com/questions/5035/how-to-change-the-windows-console-font
P.S. I have been unable to verify this solution because Windows is being difficult. See https://superuser.com/questions/390933/how-to-add-a-font-to-the-cmd-window-choices-in-windows-7-64-bit
How to get a Hebrew-enabled XP installation?
First of all, this is about an XP Home SP3 installation, Hebrew-enabled. By that I mean a standard XP US installation (or so I believe) with the addition of Hebrew capabilities for keyboard and display. I believe every XP CD can install such a system. In particular, I believe the following is all that is needed:
Control Panel -> Date, Time, Language and Regional Options -> Language and Regional Options -> Languages tab:
1) Click Details and add a Hebrew keyboard.
2) Mark with a V (i.e., check) the "Install files for complex script and right-to-left languages (including Thai)" option.
Control Panel -> Date, Time, Language and Regional Options -> Language and Regional Options -> Advanced tab:
Accept (mark with a V) 10004 (MAC - Arabic) and 10005 (MAC - Hebrew). Not sure if Arabic is a must-have here.
Now to the cmd console.
One has to explicitly add the Courier New font to the console fonts registry key, as described earlier. Otherwise, explicit Hebrew fonts will not be displayed.
Now, when a cmd console is opened, all there is to do in order to input Hebrew characters is to select the Courier New font and switch the keyboard to Hebrew mode. Having Windows cycle through its keyboard languages is easy: either press left Alt + left Shift repeatedly, or use the mouse.
As an aside, a dir command will show file names that contain Hebrew characters. However, one can't just issue
dir file_name
and see the usual output if the file name begins with a Hebrew letter. It must be
dir *file_name
I assume the leading asterisk wildcard somehow sidesteps the issue with the initial Hebrew letter.
One can also open Notepad, input Hebrew characters, save the file as UTF-8, and run the following commands in the console:
chcp 65001
type that_Notepad_file_I_saved
Saving the file as UTF-8 is done on Notepad's save screen.

TextPad and Unicode: full support?

I've got some UTF-8 files created on a Mac, and when trying to open them using TextPad on Windows, I get the following warning:
WARNING: (file name) contains characters that do not exist in code
page 1252 (ANSI Latin 1). They will be converted to the system default
character, if you click OK.
Linux (GNOME gEdit) can open the same file without complaint. What does the above mean? I thought TextPad had full UTF-8 support. Can I safely open and edit UTF-8 files with it without corrupting them?
It seems that TextPad cannot handle characters outside windows-1252 (CP1252, here carrying the misnomer “ANSI Latin 1”). I tested it on Windows, opening a plain text file created on the same system, as UTF-8 encoded, both with and without BOM, with the same result. The program’s help does not seem to contain anything related to character encodings, and its tools for writing “international characters” are for Latin-1 characters only.
There are several text editors for Windows that can deal with UTF-8 (even Notepad can open a UTF-8 file, but it can hardly be recommended for serious editing). See Alan Wood’s collection of information on Unicode editors and word processors for Windows. (Personally, I like Notepad++ and BabelPad, which are both free.)
TextPad 8, the newest as of 2016-01-28, does finally properly support BMP Unicode. It's a paid upgrade, but so far has been working flawlessly for me.
TextPad ‘supports’ UTF-8 and UTF-16 documents only in as much as it will import and export them. But it still edits files as simple bytes, and not Unicode characters (using the ANSI code page, which is code page 1252 for Western European).
So unless the file happened to contain only characters that also exist in that code page, you will lose content. This rather defeats the point of Unicode.
Indeed, this was the issue that made me flee to EmEditor at the time, though now I would agree with the previous comments and recommend Notepad++. The era of paying for text editors is long gone.
Actually, TextPad does support displaying Unicode code points, granted they went about it the wrong way. In order to display Unicode characters, you have to choose Configure -> Preferences and expand Document Classes -> Text -> Font.
You need to choose a Unicode font AND set the Script to match, e.g. Arial Unicode MS with the script CHINESE_BIG5.
However, this is a backwards approach, since the application should handle this when the user tells TextPad to open the file as Unicode or UTF-8. The Notepad application built into MS Windows detects the encoding automatically and displays the glyphs correctly based on it.
I found a discussion on this in the Textpad forums:
http://forums.textpad.com/viewtopic.php?t=11019
While I have Notepad++, TextPad handles large files with ease, while other editors I've tried, including Notepad++, either slow to a crawl or die. I'm currently trying to edit a 475 MB file, and Notepad++ is not up to the task.
TextPad: Configure menu -> Preferences -> Document Classes -> Default -> Default encoding -> UTF-8
Try the ANSI code set with File/Open; that should solve the problem in TextPad.

Displaying Hebrew text in a console

How do I add a new font to the console (Win7), and where can I find a suitable Hebrew font?
I've already checked this, but it didn't help.
Thanks.
There is another alternative console, ConEmu (open source too). It may be more useful for you.
I'm the author of this utility.
Here is a short list of its advantages: proportional and BDF font support, ANSI X3.64 and xterm 256 colors, running simple GUI apps in tabs, text search in the console, a configurable status bar, optional settings (e.g. palette) for selected applications, and more.
In case you just want it for quick testing purposes while debugging, just use Debug.WriteLine, which does support Unicode (tested with Hebrew characters only).
This will let you get some sort of output while debugging the program.
Just download Console2. It's an alternative console for Windows.
