How do I create a page.properties file for Bonitasoft - Windows

How do I create a file with ISO 8859-1 encoding on a Windows machine? I am kind of lost here and don't actually know where to start.
I wanted to use Notepad or Notepad++ to create a page.properties file for Bonitasoft. I built a REST API (the web service is coded in Groovy) that I want to test and later use in my UI Designer, which I built using Java and the Bonitasoft JAR files. But when I click Save As, I only see UTF-8. How do I go about it?

The page.properties file in Bonita is read by the Bonita Engine, a Java application, and so needs to use, as you found out, ISO 8859-1 character encoding (see the Java Properties class documentation for the details).
When creating a file with Notepad++ you can choose the character set from the "Encoding" menu -> "Character sets" -> "Western European" -> "ISO 8859-1". Alternatively, you can select "ANSI" from the "Encoding" menu; "ANSI" should be compatible with ISO 8859-1.
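For what it's worth, here is a minimal Java sketch of why the encoding matters: java.util.Properties.load(InputStream) reads the file as ISO 8859-1, and any character outside that range has to be written in the file as a \uXXXX escape. The file name and the displayName key below are just illustrative examples, not something this answer prescribes:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ReadPageProperties {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // Properties.load(InputStream) expects ISO 8859-1 bytes; characters
        // outside that range must appear in the file as \uXXXX escapes.
        try (FileInputStream in = new FileInputStream("page.properties")) {
            props.load(in);
        }
        System.out.println(props.getProperty("displayName"));
    }
}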

Related

Output file isn't encoded in UTF-8 when using a standalone Talend job

I have a simple Talend job that reads a CSV file as input, sends a SOAP request to a web service and then returns some fields of the response in a CSV file as output. The job deals with addresses throughout Europe, so the various fields of the output can have accents or non-Latin characters (e.g. for addresses in Belarus) in them.
When I run the job inside Talend Open Studio, my output file is correctly encoded in UTF-8 and all the special characters appear fine when I open the file in Notepad++. However, when I export the job as a standalone (using the "Build Job" menu option) and run the .bat file, none of the special characters are correctly encoded. When I open the file in Notepad++ it clearly says that it's encoded in UTF-8, but the end result is still wrong.
Am I missing something, or doing something wrong? I haven't found any option in Talend besides choosing "UTF-8" as encoding in the advanced options of my tFileOutputDelimited component.
Thanks in advance for your help
Passing -Dfile.encoding=UTF-8 as an argument to your JVM should solve the issue.
In order to set this in Talend, you can use the advanced settings tab in the Run view, and add a JVM argument: -Dfile.encoding=UTF-8
You can set this globally in Talend preferences as well: Windows > Preferences > Talend > Run/Debug
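As a rough illustration of what the flag changes (a hand-written sketch, not the code Talend generates): a Java writer created without an explicit charset falls back to file.encoding, so the same program can produce differently encoded output depending on how the JVM was launched, whereas naming the charset pins it down:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) throws IOException {
        // Reflects -Dfile.encoding (often windows-1252 on older Windows JVMs)
        System.out.println("Default charset: " + Charset.defaultCharset());

        // No charset given: output depends on the JVM's default encoding
        try (OutputStreamWriter byDefault =
                 new OutputStreamWriter(new FileOutputStream("out-default.csv"))) {
            byDefault.write("Adresse: Köln, Мінск\n");
        }

        // Charset given explicitly: output is UTF-8 regardless of JVM flags
        try (OutputStreamWriter asUtf8 =
                 new OutputStreamWriter(new FileOutputStream("out-utf8.csv"),
                                        StandardCharsets.UTF_8)) {
            asUtf8.write("Adresse: Köln, Мінск\n");
        }
    }
}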

Notepad++, Atom, encoding seems to be broken

I have to edit a PHP (.inc) file which was created long ago, and I don't know which editor was used to create it. The Cyrillic letters in Notepad++ are shown as if they were in the wrong encoding:
In GitHub's Atom editor, the Cyrillic letters are totally lost and replaced with the � character:
But in the browser everything is displayed correctly! The same is true when using Windows Notepad. Why is it displayed incorrectly in code editors, and is there a way to make it look normal?
P.S. OK, it only now occurred to me that I could just copy the text from Windows Notepad and save it in Notepad++ :D But I'm still curious why this happened in the code editors.
P.S.2 The problem is solved. The editors just didn't recognize the original encoding properly. When I changed it manually to Windows-1251, everything became OK.
Atom's support for encoding isn't as mature as in some other editors out there. As you have already discovered, you can change the encoding in the bottom right-hand corner and Atom will remember it; however, there are some packages which help further:
Out of the box, as you have discovered, there is Encoding Selector, which allows you to choose how Atom interprets the contents of the text file.
There is a package named Auto Encoding that automatically selects the encoding for you; it does have some issues with certain types of file, but you might find this isn't a problem.
Finally, there is my personal favorite, editor-settings, which allows you to set the encoding of all files of a specific language, with a specific file extension, or in a specific directory.
As an example, if you wanted to configure all .inc files in a directory to use windows-1251, create a .editor-settings file in the directory you are using and paste in the following:
encoding: utf-8
extensionConfig:
  inc:
    encoding: windows-1251

TextPad and Unicode: full support?

I've got some UTF-8 files created in Mac, and when trying to open them using TextPad in Windows, I get the following warning:
WARNING: (file name) contains characters that do not exist in code
page 1252 (ANSI Latin 1). They will be converted to the system default
character, if you click OK.
Linux (GNOME gEdit) can open the same file without complaints. What does the above mean? I thought that TextPad had full UTF-8 support. Can I safely open and edit UTF-8 files using it without corrupting the file?
It seems that TextPad cannot handle characters outside windows-1252 (CP1252, here carrying the misnomer “ANSI Latin 1”). I tested it on Windows, opening a plain text file created on the same system, as UTF-8 encoded, both with and without BOM, with the same result. The program’s help does not seem to contain anything related to character encodings, and its tools for writing “international characters” are for Latin-1 characters only.
There are several text editors for Windows that can deal with UTF-8 (even Notepad can open a UTF-8 file, but it can hardly be recommended for serious editing). See Alan Wood’s collection of information on Unicode editors and word processors for Windows. (Personally, I like Notepad++ and BabelPad, which are both free.)
TextPad 8, the newest as of 2016-01-28, does finally properly support BMP Unicode. It's a paid upgrade, but so far has been working flawlessly for me.
TextPad ‘supports’ UTF-8 and UTF-16 documents only in as much as it will import and export them. But it still edits files as simple bytes, and not Unicode characters (using the ANSI code page, which is code page 1252 for Western European).
So unless the file happened to contain only characters that also exist in that code page, you will lose content. This rather defeats the point of Unicode.
Indeed, this was the issue that made me flee to EmEditor at the time, though now I would agree with the previous comments and recommend Notepad++. The era of paying for text editors is long gone.
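A quick way to see the kind of loss being described, using Java's charset API as a stand-in for what an ANSI-only editor effectively does when it saves (the sample string is arbitrary):

import java.nio.charset.Charset;

public class LossyRoundTrip {
    public static void main(String[] args) {
        Charset cp1252 = Charset.forName("windows-1252");
        String original = "Ω δ → 漢";  // characters that do not exist in code page 1252
        // Encoding to CP1252 replaces unmappable characters with '?',
        // so the round trip is lossy, much like saving through an ANSI-only editor.
        String roundTripped = new String(original.getBytes(cp1252), cp1252);
        System.out.println(roundTripped);  // prints: ? ? ? ?
    }
}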
Actually, TextPad does support displaying Unicode code points, granted they went about it the wrong way. In order to display the Unicode characters you have to choose Configure -> Preferences and expand "Document Classes -> Text -> Font".
You need to choose a Unicode font AND set the Script to match. E.g. Arial Unicode MS with script CHINESE_BIG5.
However, this is a backward approach, since the application should handle this when the user tells TextPad to open the file in Unicode or UTF-8. The built-in Notepad application in MS Windows will detect the encoding automatically and display the glyphs correctly based upon the encoding.
I found a discussion on this in the Textpad forums:
http://forums.textpad.com/viewtopic.php?t=11019
While I have Notepad++, Textpad handles large files with ease while other editors I've tried, including Notepad++, either slow to a crawl or die. I'm currently trying to edit a 475MB file and Notepad++ is not up to the task.
Textpad Configure Menu --> Preferences --> Document Classes --> Default --> Default encoding --> UTF-8
Try the ANSI code set with File/Open; that should solve the problem in TextPad.

Notepad++ : Custom Syntax Highlighting for .txt files

I keep code samples that I find useful as text files on my computer. I store them as .txt files as opposed to the language in which they are written, so that they will open in Notepad++ instead of another editor (i.e. I don't want my C++ examples to open in an IDE, just in Notepad++).
Is there a way I can have Notepad++ apply appropriate syntax highlighting to the text file by reading a special code in the text file itself?
For example if I had some sql, the first line of the text file could read like this:
##Language=SQL
... my sql code properly highlighted as sql ...
Thanks in advance. I realize I could just choose the language after opening the file (i.e. Language > SQL), but it would be much more convenient if it could do it automatically.
No, it can't. You can choose it manually or use special file type extensions which you then associate with Notepad++ and tell it to highlight the files as the appropriate language.
For example, use .txtsql files for SQL, .txtcpp files for C++ and so on.
I ended up writing it myself:
You need the Python Script plugin
Add the code below to your startup.py file
Switch your Python Initialization setting from "LAZY" to "ATSTARTUP"
# if found, determine the menu command and switch the language in NPP
def switch_language_view(args):
    notepad.activateBufferID(args["bufferID"])
    lineone = editor.getLine(0)
    if '##' in lineone:
        # e.g. "##Language=SQL" -> "MENUCOMMAND.LANGUAGE_SQL"
        lineone = lineone[lineone.rfind('##'):].replace('##', '').strip()
        lineone = "MENUCOMMAND." + lineone.replace('=', '_').upper()
        try:
            notepad.menuCommand(eval(lineone))
        except:
            pass

# register the callback so it runs on every file-open notification
notepad.callback(switch_language_view, [NOTIFICATION.FILEOPENED])
I'd suggest giving them the proper file extensions, then import something like this into your registry:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\NotepadPlusPlus]

[HKEY_CLASSES_ROOT\*\shell\NotepadPlusPlus\command]
@="C:\\path\\to\\notepad++.exe \"%1\""
Then you can open your files in NP++ with a quick right-click, and NP++ will be able to auto-detect the right language based on the file extension.
Manual selection is a much simpler way. Store all the files in .txt format (irrespective of java or C or C++). Open the file in Notepad++ and select the corresponding language in the Menu. e.g. Language --> Java.
You could try some npp scripting (Python or Lua) and/or hacking macros. You could make the script start conditionally, check for your special string, and select the language for you.

Force Visual Studio (2010) to save all files in UTF-8

Is there any way I can force Visual Studio (2010) to save all files in UTF-8, always?
I do not know of a way to force it to save everything in UTF-8, but you can do so on a case-by-case basis. When you first save a document and the Save As... dialog appears, the Save button will actually be a drop-down button with two options. You want "Save with Encoding...", which will then present you with the entire list of installed Windows encodings.
The encoding you really want is way down the bottom:
Unicode (UTF-8 without signature) - Codepage 65001
although if you want to save yourself a lot of pain, you will probably want to pick the option near the top:
Unicode (UTF-8 with signature) - Codepage 65001
The difference is that the latter option sticks the UTF-8 signature at the start of the file (the signature is just the UTF-16 byte-order mark encoded in UTF-8). This is one of my pet peeves, as UTF-8 doesn't have multiple byte orders, so the BOM is redundant at best, and breaks all kinds of text-processing tools at worst. MS uses it to "detect" UTF-8 automatically, since for single-byte characters UTF-8, ISO-8859-1, and CP-1252 are identical except for a sequence of 32 characters (0x80 - 0x9F) that MS basically made up.
If you only ever edit or process your files with Visual Studio or the .NET tools, then saving with the signature will probably work fine. If you need to save files for use by other tools (batch files, SQL queries, PHP scripts, etc.), the signature will cause problems, and you should save them without it. If you do this, you may want to enable the option (under Tools -> Options -> Text Editor) to "Auto-detect UTF-8 encoding without signature", or else right-click on the file, choose "Open With...", and select the editor option that includes "with Encoding".
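To make the "signature" mentioned above concrete: it is simply the three bytes EF BB BF (U+FEFF encoded as UTF-8) at the very start of the file. A small sketch of how a tool might test for it (the file name is just an example):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BomCheck {
    public static void main(String[] args) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get("query.sql"));
        // The UTF-8 signature (BOM) is the byte sequence EF BB BF at offset 0.
        boolean hasSignature = bytes.length >= 3
                && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB
                && (bytes[2] & 0xFF) == 0xBF;
        System.out.println(hasSignature ? "UTF-8 with signature" : "no UTF-8 signature");
    }
}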
I think it saves files in the current codepage. There's an option under Tools -> Options -> Environment -> Documents that will make it save in Unicode when it cannot save in the current codepage. But I don't know if that helps...
I think you want to try the ForceUtf8 (with BOM) / ForceUtf8 (without BOM) extension.
Just search for UTF8 in the VS extension gallery (Tools -> Extensions and Updates).
