Handling encoding and decoding using predefined functions in the FreeMarker Template Language?

Is there a way to handle decoding and encoding using predefined functions in the FreeMarker Template Language?
I am trying to encode a string to UTF-8 in the FreeMarker Template Language (FTL). Are there any predefined functions to do this? From my basic investigation it appears there are none.

You are looking for URL encoding (a.k.a. percent encoding): ${foo?url}. The charset used depends on FreeMarker's url_escaping_charset configuration setting, so you should set that to UTF-8. (You can also specify the charset directly, as in ${foo?url('UTF-8')}, but setting it in the configuration is better.)
See the documentation: http://freemarker.org/docs/ref_builtins_string.html#ref_builtin_url

Related

Ruby internal and external encoding

I have gone through various materials and am unable to find the difference between the default internal encoding and the external encoding in Ruby. Can anyone help me in this regard?
When reading strings from external sources (such as files, network sockets, ...), Ruby may assume that the data is in a specific string encoding. This is the external encoding. For example, if you are reading text files and know that they are encoded in UTF-8, you may set the external encoding to UTF-8 to hint to Ruby that the data is supposed to be UTF-8 encoded.
Now, when reading the data, Ruby can also convert it to a different encoding which might be more useful for your program. For example, if you are assembling data from different sources, such as files you read and an HTTP request, it is often useful to make sure that your strings all have the same encoding regardless of their source.
For this, you can set the internal encoding. If you set the correct external encoding for your data source and set, for example, the internal encoding to UTF-8, you can be fairly sure that all your strings (regardless of where they come from) are correctly UTF-8 encoded and can be manipulated, merged and changed at will without worrying about encoding issues deep in your business logic.
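A minimal sketch of how the two settings interact, assuming a hypothetical file legacy.txt that is known to be ISO-8859-1 encoded:
# Open with external encoding ISO-8859-1 and internal encoding UTF-8;
# Ruby transcodes the bytes to UTF-8 as they are read.
File.open("legacy.txt", "r:ISO-8859-1:UTF-8") do |f|
  text = f.read
  puts text.encoding   # => UTF-8
end

# The same pair can also be set process-wide:
Encoding.default_external = Encoding::ISO_8859_1
Encoding.default_internal = Encoding::UTF_8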

Liquid template encoding issue with Spring Boot version 2.0.5

I am using Spring Boot version 2.0.5 and Liquid template version 0.7.8.
My problem is that when I use German text in the template file and then send the mail, a few German characters are converted into ? marks.
So what is the solution for this?
Somewhere along the path from the text file template, through processing, to sending out the email, the character encoding is being mangled, so that the German characters, encoded in one scheme, are being rendered as the wrong glyph in another scheme in the email.
The first thing to check is the encoding of the template file. Then investigate how the email is being rendered. For example, if it is an HTML email, see whether there is a character encoding declared in the header that is different, e.g.:
<head><meta charset="utf-8" /></head>
If this differs from the encoding of the file, e.g. ISO-8859-1, then the first thing I would try is to resave the template in UTF-8; you should be able to do that within most IDEs or advanced text editors such as Notepad++.
(As the glyphs are question marks it may be that the template is UTF-8 or UTF-16 and the HTML is in a more limited charset.)
If that doesn't work then you may need to look at your code and pay attention to how the raw bytes from the template are converted to Strings. For example:
String template = new String(bytesFromFile);
would use the system default charset, which might differ from the encoding of the file. The safe way to convert the bytes to a String is to specify the character set explicitly:
String template = new String(bytesFromFile, "UTF-8");

What's the default encoding for System.IO.File.ReadAllText

If we don't specify the encoding, what encoding will it use?
I do not think it's System.Text.Encoding.Default. Things work well if I EXPLICITLY pass System.Text.Encoding.Default, but things go wrong when I leave that out.
So this doesn't work well
Dim b = System.IO.File.ReadAllText("test.txt")
System.IO.File.WriteAllText("test4.txt", b)
but this works well
Dim b = System.IO.File.ReadAllText("test.txt", System.Text.Encoding.Default)
System.IO.File.WriteAllText("test4.txt", b, System.Text.Encoding.Default)
If we do not specify an encoding, will VB.NET try to figure out the encoding from the text file?
Also, what is System.Text.Encoding.Default?
It's the system default. What is my system default and how can I change it?
How do I know the encoding used in a text file?
If I create a new text file and open it with SciTE, I see that the encoding is the "Code Page Property". What is the Code Page Property?
Look here: "This method attempts to automatically detect the encoding of a file based on the presence of byte order marks. Encoding formats UTF-8 and UTF-32 (both big-endian and little-endian) can be detected."
See also http://msdn.microsoft.com/en-us/library/ms143375(v=vs.110).aspx
This method uses UTF-8 encoding without a Byte-Order Mark (BOM)

JSON encoding issue with Ruby 1.9 and HTTParty

I've created a WebAPI that returns JSON.
The initial data is as follows (UTF-8 encoded):
#text="Rosenborg har ikke h\xC3\xB8rt hva Steffen"
Then, with a .to_json on my object, here is what is sent by the API (I think it is ISO-8859-1 encoded):
"text":"Rosenborg har ikke h\ufffd\ufffdrt hva Steffen"
I'm using HTTParty on the client side, and this is what I finally get:
"text":"Rosenborg har ikke h��rt hva"
Both WebAPI and client app are using Ruby 1.9.2 and Rails 3.
I'm a bit lost with this encoding issue... I tried to add the UTF-8 encoding header to my Ruby files, but it didn't change anything.
I guess that I'm missing an encoding / decoding part somewhere... does anyone have an idea?
Thank you very much!!!
Vincent
In Ruby 1.9, encoding is explicit now. However, Rails may or may not be configured to send the responses in the encoding you expect. You'll have to set the global configuration setting:
Encoding.default_external = "utf-8"
I believe the encoding that Ruby uses by default for serialization is the platform default. In America on Windows that would be Windows-1252 (CP1252). Other countries would have an alternate encoding.
Edit: Also see this url if the json is executed against MySQL: https://rails.lighthouseapp.com/projects/8994/tickets/5210-encoding-problem-in-json-format-response
Edit 2: Rails core and its suite of libraries (ActiveRecord, et al.) will respect the Encoding.default_external configuration setting, which encodes all the values it sends. Unfortunately, because encoding is a relatively new concept in Ruby, not every 3rd-party library has been adjusted to handle encodings properly. The ones that have may require additional configuration settings. This includes MySQL and the RSolr library you were using.
In all versions of Ruby before the 1.9 series, a string was just an array of bytes. When you've been thinking like that for so long, it's hard to wrap your head around the concept of multiple string encodings. The thing that is even more confusing now is that unlike Java, C#, and other languages that use some form of UTF as the native string format, Ruby allows each string to be encoded differently. In retrospect, that might be a mistake, but at least now they are respecting encoding.
The String#force_encoding method is designed to relabel the byte sequence with the new encoding, but does not change any of the underlying data, so it is possible to end up with invalid byte sequences. There is another method, String#encode, that will transform the bytes from one encoding to another and guarantees valid byte sequences. For more information read this:
http://blog.grayproductions.net/articles/ruby_19s_string
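As a small illustration of that difference (a sketch, not part of the original answer), using a string whose bytes happen to be valid UTF-8:
# encoding: utf-8
s = "Rosenborg har ikke h\xC3\xB8rt"     # the UTF-8 bytes for "hørt"
s.force_encoding(Encoding::ASCII_8BIT)   # relabel only; the bytes are untouched
s.force_encoding(Encoding::UTF_8)        # relabel back; still the same bytes
puts s.valid_encoding?                   # => true, the bytes really are UTF-8
latin1 = s.encode(Encoding::ISO_8859_1)  # transcodes: "ø" becomes the single byte 0xF8
puts latin1.bytesize                     # one byte shorter than the UTF-8 version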
Ok, I finally found out what the problem is...
I'm using RSolr to get my data from Solr, and by default the encoding for all results is unfortunately 'US-ASCII', as mentioned here (and verified by myself):
http://groups.google.com/group/rsolr/browse_thread/thread/2d4890fa7737e7ef#
So you need to force the encoding as follows:
my_string.force_encoding(Encoding::UTF_8)
Maybe there is a nice encoding option that could be passed to RSolr!

Create own encoding

How can I create my own encoding in Ruby (1.9)? The encoding would be used for converting strings while reading from / writing to a file, i.e. generally for manipulating data held in strings with a nonstandard encoding (http://en.wikipedia.org/wiki/Mazovia_encoding).
To your updated question: at the moment all you can do is write some custom code that handles the file reading/writing at the byte level and does the needed conversions.
If you are asking how to use different character encodings in Ruby 1.9, I point you to
Working with Encodings in Ruby 1.9 and
Understanding M17n
I couldn't find any references in the Ruby docs about adding proprietary encodings, and the Encoding class doesn't have any public constructors (though Encoding.find() can dynamically load some of the encodings Iconv supports). Unfortunately, as far as I know, Mazovia is unsupported even in iconv, so you're stuck with implementing your own conversion, for example along the lines sketched below...
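As a rough sketch of that byte-level approach, assuming a hypothetical file legacy_mazovia.txt (the mapping entries below are only illustrative placeholders; fill in the real values from the Mazovia code page table):
# encoding: utf-8

# Hypothetical, partial lookup table: Mazovia byte -> Unicode character.
# Verify and complete the entries against the actual Mazovia specification.
MAZOVIA_TO_UNICODE = {
  0x86 => "ą",   # example entry only
  0x8D => "ć",   # example entry only
}

def mazovia_to_utf8(raw_bytes)
  raw_bytes.each_byte.map { |b|
    # Pass plain ASCII bytes through, translate mapped bytes, and mark the rest.
    MAZOVIA_TO_UNICODE[b] || (b < 0x80 ? b.chr : "?")
  }.join
end

raw = File.open("legacy_mazovia.txt", "rb") { |f| f.read }
utf8_text = mazovia_to_utf8(raw)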
