Is it possible to get ColdFusion to log UTF-8 data (eg: Chinese characters) using <cflog>?
By default it just logs question marks instead of the characters.
I know I could open/write/close the log file using the file API, but I don't want to over-complicate something as simple as logging.
Yeah, but you need to tell your entire JVM to process files as UTF-8. You can do this by adding this to your java.args in jvm.config:
-Dfile.encoding=utf8
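In Adobe ColdFusion this flag is appended to the existing java.args line in jvm.config; a sketch, where the other arguments are illustrative and yours will differ:

```
java.args=-server -Xms256m -Xmx512m -Dfile.encoding=UTF-8
```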
Reference.
You may also be able to set log4j.appender.LOGFILE.encoding=UTF-8 in ColdFusion's /lib/log4j.properties file and not affect the whole VM. Although if Adam's solution works, I'd not necessarily change it.
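For reference, that property would sit next to whatever LOGFILE appender definition is already in /lib/log4j.properties; the appender class shown here is illustrative, not necessarily what ColdFusion ships:

```
log4j.appender.LOGFILE=org.apache.log4j.FileAppender
log4j.appender.LOGFILE.encoding=UTF-8
```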
So I'm trying to compress some documents I made when I get the following error message:
I have no idea what the character is, as it just looks like a blank space. I have removed the blank spaces from my documents and it still won't let me zip it. Online answers seem to refer to needing to change the language setting on my computer, but I haven't written any foreign languages. Any help would be appreciated.
Go to the Users directory and make a new directory called 'Analytics'.
Then move your 'Account_Over_Time_Analysis' to this folder and try to compress again.
If it fails again, please try 7-Zip in case you're using something else.
Such an error can be caused by a directory name in a different language, a name with spaces, or a name with escape characters.
To fix this you could hunt around for the correct language pack, or just install 7-Zip and use that to zip the files instead.
I want to create a workaround for the file: URL protocol in Chrome, as its security doesn't allow you to open certain files or locations. This would be for a small app of mine that I designed.
I searched around, and while I've found a lot of potentially good answers (such as this answer), I don't fully understand what each line should do, and whether it would actually work in my application.
The end result that I want is to have a protocol like ih-link: that would allow me to open links in Windows Explorer or elsewhere, similar to how you can in, say, Outlook or Microsoft Word (if you create local or network shortcuts to a folder or file, it'll open without issue). Attempting this in Google Chrome gives varied results: either a browser-generated directory listing, or, usually, an error.
I'd like to know, assuming the answer I linked to would work for me...
Is "URL Protocol"="" where I'd define the name of the protocol, so I'd use "URL Protocol"="ih-link", or something like that? I remember reading something about leaving that blank, so would I replace IntranetFileLauncher with whatever protocol name I want (in this case, it'd be ih-link?)
The next part that references explorer.exe seems to just reference the icon.
After that, what do the following two lines do?
[HKEY_CLASSES_ROOT\IntranetFileLauncher\shell]
[HKEY_CLASSES_ROOT\IntranetFileLauncher\shell\open]
The last lines of that script appear to just strip the protocol from the URL and pass the file path to explorer. Am I correct about this? I take it I would need to format the path as I would for Windows (using \ instead of /)?
I'm afraid to experiment without knowing more, mainly because I know that many things with the registry can be very dicey, so any clarification on this would be helpful.
Reading the actual documentation is better than trying to guess what some random code sample does.
URL Protocol is just a marker; it does not need a value. The key's default value (written @ in a .reg file) is where the name of your protocol is stored.
Yes, that entry (incorrectly) specifies the icon.
Those two lines are pointless: they create two empty keys, when only the ...\shell\open\command line is required to properly build that registry path.
Yes, you might have to change / to \; you can add call set url=%url:/=\% to the command.
Using cmd.exe to parse untrusted input is not ideal, it would be better to write a custom application.
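Putting those pieces together, here is a minimal .reg sketch for a hypothetical ih-link: protocol that hands the URL to explorer.exe. The key name, description, and command are illustrative (adapted from the linked answer, not tested here), and note that the %1 passed to the command still contains the ih-link: prefix, which the original script strips with batch string substitution:

```
Windows Registry Editor Version 5.00

; Hypothetical "ih-link:" handler -- back up your registry before importing.
[HKEY_CLASSES_ROOT\ih-link]
@="URL:ih-link Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\ih-link\shell\open\command]
@="explorer.exe \"%1\""
```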
I'm having issues uploading .html files to our z/OS USS environment - specifically character set and code page conversions.
I telnet in with PuTTY, upload with WinSCP, and edit with Notepad++, but I have a strange situation: I can have a USS shell session going and can cat and vi a file, and if it looks OK there, it'll be displayed by the z/OS web server OK; but if I then try to edit it from WinSCP, I just get garbage:
LZÄÖÃãè×Å#ˆ£”“nLˆ£”“#n##Lˆ…„n####L”…£#ƒˆ™¢…£~äãÆ`øn####L£‰£“…nÃÉÃâ#
Similarly, if it looks OK in Notepad++ then it'll look like garbage when served as a web page.
I have the text file transfer options on, and if I list the files in a shell session I get:
t UTF-8 T=on -rw-r--r-- 1 JOCS065 JOCS2 9824 Jul 30 14:45 JS_Graphviz.html
t UTF-8 T=on -rw-r--r-- 1 JOCS065 JOCS2 29370 Jul 30 14:15 JS_Graphviz_new.html
JS_Graphviz.html won't show as a webpage but JS_Graphviz_new.html will.
Both have <meta charset="utf-8">; Notepad++ shows both as ANSI.
Oddly, if I take the good code from Notepad++, and then edit the same file in vi via my shell session, delete everything and paste in the code I copied from Notepad++, it can then be served by the web server (and look like garbage in NPP etc).
So there's obviously some hidden flag or setting for the code page or character set. Does anyone have a rock-solid editing solution for text files in USS on z/OS?
EDIT
Screenshot showing errors with JavaScript files
I'm going to go ahead and answer, though this could be wrong. This is known scp behavior (see "How can I convince z/OS scp to transfer binary files?"). If you connect via FTP or FTPS, you should get the behavior you expect. Or you can try using file tagging, although that's beyond my scope of knowledge. See https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.bpxa400/bpxug294.htm.
In general my experience is that when environment variable _BPXK_AUTOCVT=ON you'll get auto conversion under the covers. See this article to get better insight on how this process works and adjust for your workflow:
_BPXK_AUTOCVT
Used when enabling automatic conversion of tagged files. When set, this variable overrides the AUTOCVT setting in BPXPRMxx. For fork (BPX1FRK/BPX4FRK), spawn (BPX1SPN/BPX4SPN), exec (BPX1EXC/BPX4EXC), and pthread_create (BPX1PTC/BPX4PTC), _BPXK_AUTOCVT is propagated from the parent to the child. For pthread_create, the parent is the Initial Program Task (IPT).
ON
Activates the automatic file conversion of tagged files. This option affects conversion for I/O for regular, pipe, and character-special files that are tagged.
OFF
Deactivates the automatic file conversion of tagged files. OFF is the default.
ALL
Activates the automatic conversion of tagged files that are supported by Unicode Services. This option affects conversion for I/O for regular and pipe files that are tagged. Setting or unsetting ALL has no effect after translation for a file begins. If the conversion is between EBCDIC and ASCII, this option also affects conversion for I/O for character special files.
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.bpxb200/bpxkenv.htm
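In practice that usually means tagging the file and turning auto-conversion on. A sketch of the commands, which are z/OS USS specific (chtag and the -T flag of ls do not exist elsewhere), and where the ISO8859-1 tag assumes the file really contains ASCII text:

```
export _BPXK_AUTOCVT=ON                # enable auto-conversion of tagged files
chtag -tc ISO8859-1 JS_Graphviz.html   # tag the file as ASCII text
ls -T JS_Graphviz.html                 # verify: should show "t ISO8859-1 T=on"
```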
I am a Plone newbie and I needed to change a translated word in a .po file; the translation is in Arabic. When I changed the word to the right word and restarted Zope, my Plone site no longer reads the Arabic translations from this file and displays question marks instead.
When I searched, I found that I must do some synchronization with the .pot file (the translation catalog), but I think this is not the actual problem. Any clue?
You may have saved the file with the wrong encoding. Try saving the file as UTF-8.
It seems that you edited the file with a normal text editor, which can corrupt the file's charset. To avoid this, you can use a free translation program such as Poedit; those programs normally take care of the charset. Search Google for "translate .po YOURPLATFORM"; there are tons of free tools.
With these tools you can also import .pot files and (re)generate .po files from them.
You just have to restart Plone when you're done editing the file.
If you have a good editor, you can try guessing the charset and correcting it manually, but this often leads to the described problem and carries the risk that you forget to save with the correct charset in a later edit.
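A quick way to check whether an edit broke the encoding is to compare the charset declared in the .po header with how the file's bytes actually decode. A small sketch (the function names are mine, not part of Plone or gettext):

```python
import re

def declared_charset(po_bytes):
    """Return the charset named in the .po Content-Type header, or None."""
    m = re.search(rb'charset=([A-Za-z0-9_-]+)', po_bytes)
    return m.group(1).decode('ascii') if m else None

def charset_matches(po_bytes):
    """True if the file's bytes decode cleanly in the declared charset."""
    cs = declared_charset(po_bytes)
    if cs is None:
        return False
    try:
        po_bytes.decode(cs)
        return True
    except (UnicodeDecodeError, LookupError):
        return False
```

If charset_matches(open('your.po', 'rb').read()) is False, re-save the file in the charset the header declares (usually UTF-8) and restart Zope.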
When I open a file in Eclipse it shows improper line spacing, with an extra line break between every line. When I open the file with Notepad or WordPad it doesn't show the extra line breaks that only Eclipse shows. How do I get Eclipse to read these files like Notepad and WordPad do, without those line breaks?
-edit: I don't have this problem with all files, only a select few where I have made local changes, uploaded them to our Sun station, then pulled those files back to my local workstation for future modifications.
Eclipse should have a File -> Convert Line Delimiters To... option that may correct this for you. (If it doesn't work on your file, this article may help.)
Really, though, you should have your file transfer program treat your source files as ASCII (text) instead of binary. Then your line-ending problem should be moot.
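For reference, the cleanup that the Convert Line Delimiters option performs amounts to collapsing Windows (CR+LF) and stray carriage returns down to single line breaks. An illustrative one-liner, not Eclipse's actual code:

```python
def to_unix(text):
    """Collapse CRLF and lone CR line endings to a single LF."""
    return text.replace('\r\n', '\n').replace('\r', '\n')

# 'a\r\nb\rc\n' becomes 'a\nb\nc\n'
```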
It's possible that the server (or something in between) is treating each of your CR+LF pairs as two separate line breaks (CR, then LF)?
Try specifically setting the Text File Encoding (Window->Preferences->General->Workspace), or alternatively use File->Convert Line Delimiters To->Windows every time you get the latest version (I know, not ideal).
It turns out that the problem was solved by doing my FTP in binary only and setting the Eclipse encoding to US-ASCII. I don't fully understand why this fixed the problem, but it worked. Thanks for the two answers; they both led me to my solution.