I am a Plone newbie and I needed to change a translated word in a .po file; the translation is in Arabic. When I changed the word to the right one and restarted Zope, my Plone site stopped reading the Arabic translations from this file and now displays question marks instead.
When I searched, I found that I should synchronize the .po file with the .pot file (the translation catalog), but I don't think that is the actual problem. Any clue?
You may have saved the file with the wrong encoding. Try saving the file as UTF-8.
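If the file was already saved in the wrong codepage, one way to repair it is to re-decode and re-save it. Below is a minimal Python sketch; it assumes the editor saved the file as Windows-1256 (a common Arabic codepage), and the file name is hypothetical:

    # Minimal sketch: re-save a .po file as UTF-8.
    # Assumes the broken file was saved as Windows-1256 (cp1256);
    # adjust SOURCE_ENCODING if your editor used a different codepage.
    SOURCE_ENCODING = "cp1256"
    PO_FILE = "plone-ar.po"   # hypothetical file name

    with open(PO_FILE, "rb") as f:
        raw = f.read()

    text = raw.decode(SOURCE_ENCODING)      # interpret the bytes correctly
    with open(PO_FILE, "wb") as f:
        f.write(text.encode("utf-8"))       # write them back out as UTF-8

Also check that the header of the .po file still declares charset=UTF-8 in its Content-Type line, since gettext trusts that declaration.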
It seems that you edited the file with a normal text editor, which can corrupt the file's charset. To avoid this, you can use a free translation program such as Poedit; those programs normally take care of the charset. Search Google for "translate .po YOURPLATFORM"; there are tons of free tools.
With these tools you can also import .pot files and (re)generate .po files from them.
You just have to restart Plone when you're done editing the file.
If you have a good editor, you can try to guess the charset and correct it manually, but this often leads to exactly the problem you describe, and carries the risk that you forget to save with the correct charset in a later edit.
I am editing KML files of maps of history and science that already appear on http://climateviewer.org/. I am editing them in Sublime Text and/or Notepad, since all I am doing is editing text, deleting extended data, and switching links and references from my old website MyReadingMapped to the new site, which has far better technology. You can see images of the maps I made at http://climateviewer.org/myreadingmapped/
BTW, I am not a programmer or software developer, but rather a retired marketing communications professional who understands just enough coding to make these changes and can do some HTML as well.
The problem I am having is that, of the 30 or so files I have edited so far, 4 have a parsing error that consistently involves closing a Placemark, yet there appears to be nothing wrong with the code. I am testing the files by uploading them to Google Earth to get the error statements, and so far I have fixed many problems, but I can't seem to solve this one. Jim Lee, ClimateViewer's creator, tells me to debug them.
How do I debug them, and is it something I would be able to learn without formal training?
There are several tools available to debug a KML file, which is simply an XML file that must conform to the rules of the KML specification. As an XML file, all start and end tags must match; in addition, the tags are case-sensitive.
The easiest trick is to use a web browser to validate it. Simply rename the .kml extension to .xml, then drag the .xml file onto an open web browser. Parsing errors will be identified with row and column numbers.
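The same check can be scripted. Here is a minimal Python sketch that prints the first syntax error with its line and column; the file name map.kml is a placeholder:

    # Minimal sketch: report the first XML syntax error in a KML file.
    import xml.etree.ElementTree as ET

    try:
        ET.parse("map.kml")                  # a KML file is just XML
        print("No XML syntax errors found.")
    except ET.ParseError as err:
        line, column = err.position          # where the parser gave up
        print("Parse error at line %d, column %d: %s" % (line, column, err))

A Placemark that is closed with a mismatched or misspelled tag will show up here as a "mismatched tag" error at the position of the closing tag.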
Next, you can upload the KML file to KML Validator to get a list of potential errors that need to be fixed or run the standalone command-line XmlValidator tool.
Additional tips to fix KML files are described here along with details about KML validation.
I have an Eclipse CDT C project on a Windows machine with all files, including the Doxyfile, encoded as UTF-8.
UTF-8 has also been specified as the encoding within the Doxyfile.
Yet the LaTeX files produced are encoded in ISO-8859-1.
In fact, if I open a .tex file (with TeXworks), change the file encoding, save it, and close it, when I re-open it the encoding is still marked as ISO-8859-1.
This means that UTF-8 symbols (such as \Delta) in the source make it through a Doxygen build OK, but cause the PDF make to fail.
I'm not at all familiar with LaTeX, so I'm not sure where to even start searching on this one; Google queries to date have been fruitless. I'm also still not sure whether it is Doxygen, TeX, or Windows that causes the .tex file encoding to be ISO-8859-1!
Thus it would be good to know: even though there's no specific option for setting Doxygen's .tex output encoding, is it set to the same as the DOXYFILE_ENCODING setting?
Assuming that is the case, moving one of the .tex files from the project folder to the desktop and attempting the encoding change via TeXworks still fails to hold, so I suspect either Windows or TeXworks is preventing the encoding from being UTF-8. My lack of knowledge of encodings and LaTeX has left me at a loose end here; any suggestions on what to try next?
Thanks
:\ I basically just ended up re-installing everything and making sure Git ignored the .tex files and handled the PDF files separately from the code files, so that the encoding was forced. Not really a fix, but it builds.
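For anyone who lands on this later: another workaround is to batch-convert the generated .tex files to UTF-8 before running the PDF make. A minimal Python sketch, assuming the files really are ISO-8859-1 and sit in Doxygen's default latex/ output folder:

    # Minimal sketch: re-save Doxygen's generated .tex files as UTF-8.
    # Assumes the files are currently ISO-8859-1 and live in "latex/".
    import glob

    for path in glob.glob("latex/*.tex"):
        with open(path, "rb") as f:
            raw = f.read()
        text = raw.decode("iso-8859-1")     # current encoding (assumed)
        with open(path, "wb") as f:
            f.write(text.encode("utf-8"))   # re-save as UTF-8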
How can I edit the Info.plist file of an Xcode project? I have tried searching a lot, but found nothing specific.
An Info.plist file can be considered in two ways. One, it's just a specially formatted text file, so thinking that way you can manipulate the text directly; AppleScript can read text files, manipulate text, and write text files. Two, it's a basic XML file formatted with Apple's tags to create a "plist" file, so you could use XML tools on the file as well; System Events has XML tools. There's also a Unix command-line program called "defaults" that can work on them.
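If you're open to a scripting language besides AppleScript, Python's standard plistlib module treats a plist as a plain dictionary. A minimal sketch (the path and key are just examples):

    # Minimal sketch: edit one key in an Info.plist with plistlib.
    import plistlib

    PLIST = "MyApp/Info.plist"              # hypothetical path

    with open(PLIST, "rb") as f:
        info = plistlib.load(f)             # parse the XML plist into a dict

    info["CFBundleShortVersionString"] = "1.0.1"   # change whatever key you need

    with open(PLIST, "wb") as f:
        plistlib.dump(info, f)              # write the plist back out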
So there are several tools. You need to think about what you want to do and how complicated the task is, and then decide which tool best fits your requirements.
After you figure out those basics, try some things and come back and ask specific questions.
I have problems with file encodings in Visual Studio 2008. While compiling, I'm getting errors such as:
When I try to open the file where a particular error occurs, an encoding window appears:
By default, auto-detect is set. When I change the encoding option to UTF-8, everything works. If I open each problematic file in my project using UTF-8 encoding, the project starts to compile. The problem is that I have too many files, and it is ridiculous to open each file and set its encoding to UTF-8. Is there any way to do this quickly?
My VS settings are:
I'm using Windows Server 2008 R2.
UPDATE:
For Hans Passant and Noah Richards: thanks for the interaction. I recently changed my operating system, so everything is fresh. I've also downloaded a fresh solution from source control.
In the OS regional settings, I've changed the system locale to Polish (Poland):
In VS, I've changed the international settings to the same as Windows:
The problem is still not solved.
When I open some .cs files using auto-detection for encoding and then check File -> Advanced Save Options..., some of these .cs files have codepage 1250:
but internally look like the following:
It is weird, because when I check the properties of those particular files in source control, they seem to have UTF-8 encoding set:
I don't understand this mismatch.
All other files have UTF-8 encoding:
and open correctly. I basically have no idea what is going wrong, because as far as I know my friend has the same options set as I do, and the same project compiles correctly for him. So far he has happily not encountered any encoding issues.
That uppercase A with a circumflex tells me that the file is UTF-8 (if you look with a hex editor, you will probably see that the bytes are C2 A0). That is a non-breaking space in UTF-8.
Visual Studio does not detect the encoding because (most likely) there are not enough high-ASCII characters in the file for reliable detection.
Also, there is no BOM (Byte Order Mark). That would help with the detection (this is the "signature" in the "UTF-8 with signature" description).
What you can do: add a BOM to all the files that don't have one.
How to add it? Make a file containing only a BOM (an empty file in Notepad, Save As, select UTF-8 as the encoding). It will be 3 bytes long (EF BB BF).
You can prepend it to the beginning of each file that is missing the BOM:
copy /b/v BOM.txt + YourFile.cs YourFile_Ok.cs
ren YourFile.cs YourFile_Org.cs
ren YourFile_Ok.cs YourFile.cs
Make sure there is a + between the name of the BOM file and the name of the original file.
Try it on one or two files, and if it works you can create some batch file to do that.
Or write a small C# application (since you are a C# programmer) that can detect whether the file already has a BOM, so that you don't add it twice. Of course, you can do this in almost anything, from Perl to PowerShell to C++ :-)
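For example, here is a minimal Python sketch of exactly that check; it assumes the files are already valid UTF-8 and are merely missing the signature:

    # Minimal sketch: prepend the UTF-8 BOM (EF BB BF) to every .cs file
    # that does not already start with one.
    import glob

    BOM = b"\xef\xbb\xbf"

    for path in glob.glob("**/*.cs", recursive=True):
        with open(path, "rb") as f:
            data = f.read()
        if not data.startswith(BOM):        # skip files that already have it
            with open(path, "wb") as f:
                f.write(BOM + data)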
Once you've opened the files in UTF-8 mode, can you try changing the Advanced Save Options for each file and re-saving it (as UTF-8 with signature, if you think these files should be UTF-8)?
The auto-detect encoding detection is best-effort, so something in a file is likely causing it to be detected as something other than UTF-8, such as having only ASCII characters in the first kilobyte of the file, or having a BOM that indicates an encoding other than UTF-8. Re-saving the file as UTF-8 with signature should (hopefully) correct that.
If it continues happening after that, let me know, and we can try to track down what is causing them to be created/saved like that in the first place.
CKeditor's installation instructions tell me to just unzip the whole distribution file into my web server's production directory. But it is full of files I definitely don't want there, such as source code, examples, and even server-side PHP code. I got rid of most of these files, but there is one I'm not sure about: contents.css.
I can see this file uses a lot of styles I definitely don't want on my site. My question is:
Is contents.css required by CKeditor, or used by default? Do I even need this file on my production site?
I suppose it depends on what you're using in CKeditor, or what you plan to use later. Personally, I'd suggest renaming the file to (something like) contents.css.old and creating a new contents.css file; copy across all the styles that you think you'll need, and then destruct-test your implementation of CKeditor to assess whether you've got all the styles you need.
Add to, or remove from, that file to get your finished version, and then use that one. I'd strongly advise keeping the original version around, though, for future development purposes.
To your specific questions, though:
Is contents.css required by CKeditor, or used by default?
It is used by default, I believe.
Do I even need this file on my production site?
Not so far as I know; its absence will likely cause things to look a little less pretty, though, until you apply your own styles.
As suggested above, though, I'd rename the original and then create your own stylesheet with the same name; it's rather easier than going through all the various .js files looking for, and changing as appropriate, references to contents.css.