I have an article which will have more than one translation, English and Mandarin to start with. I can either use two documents and have each link to the other, or I can put one after the other and add a link to each at the start.
Is there a more elegant way of handling this? E.g. could I choose the language at the start and have that choice applied inline?
I do not believe there is any other way to do this, at least I know there isn't within asciidoctor (which is what hubpress is using). I do not know if hubpress has anything in place for this.
This may be a stupid question, but here goes.
I've seen several projects using some translation library (e.g. gettext) working with plain English placeholders. So for example:
_("Please enter your name");
instead of abstract placeholders (which have always been my instinctive preference)
_("error_please_enter_name");
I have seen various recommendations on SO to work with the former method, but I don't understand why. What I don't get is: what do you do if you need to change the English wording? Because if the actual text is used as the key for all existing translations, you would have to edit all the translations, too, and change each key. Or don't you?
Isn't that awfully cumbersome? Why is this the industry standard?
It's definitely not proper normalization to do it this way. Are there massive advantages to this method that I'm not seeing?
Yes, you have to alter the existing translation files, and that is a good thing.
If you change the English wording, the translations probably need to change, too. Even if they don't, you need someone who speaks the other language to check.
You prep a new version, and part of the QA process is checking the translations. If the English wording changed and nobody checked the translation, it'll stick out like a sore thumb and it'll get fixed.
The main language already exists: you don't need to translate it.
Translators have better context with a real sentence than vague placeholders.
The placeholders are just keys; it's still possible to change the original language by creating a translation for it, because when a translation doesn't exist, the placeholder is used as the translated text.
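To illustrate that fallback, here is a minimal sketch using Python's gettext module (the "myapp" domain and the locale directory are made-up names): with fallback=True, a missing catalogue or a missing entry simply returns the msgid, so the English source strings double as the default texts.

import gettext

# Look for ./locale/de/LC_MESSAGES/myapp.mo; fall back silently if it is absent.
t = gettext.translation("myapp", localedir="locale", languages=["de"], fallback=True)
_ = t.gettext

# Prints the German translation if the catalogue has one,
# otherwise the English msgid itself, unchanged.
print(_("Please enter your name"))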
We've been using abstract placeholders for a while and it was pretty annoying having to write everything twice when creating a new function. When English is the placeholder, you just write the code in English, you have meaningful output from the start and don't have to think about naming placeholders.
So my reason would be less work for the developers.
I like your second approach. When translating texts you always have the problem of homonyms. 'Open' can mean the state of a window but also the verb to perform the action. In other languages these homonyms may not exist. That's why you should be able to add meaning to your placeholders. The best approach is to put this meaning in your text library. If this is not possible on the platform or framework you use, it might be a good idea to define a 'development language'. This language adds meaning to the text entries, like 'action_open' and 'state_open'. You will of course have to put extra effort into translating this language to plain English (or the language you develop for). I have applied this philosophy in some large projects and in the long run it saves some time (and headaches).
The best way in my opinion is keeping meaning separate so if you develop your own translation library or the one you use supports it you can do something like this:
_(i18n("Please enter your name", "error_please_enter_name"));
Where:
i18n(text, meaning)
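If your toolchain is gettext-based, the same idea already exists as message "contexts" (pgettext), where the context string plays the role of the meaning argument. A minimal sketch in Python (3.8+, which exposes pgettext on translation objects; the domain and paths are again made up):

import gettext

t = gettext.translation("myapp", localedir="locale", languages=["de"], fallback=True)

# Same English text, two different meanings -> two separate catalogue entries,
# so translators can render them differently.
state_open = t.pgettext("state", "Open")
action_open = t.pgettext("action", "Open")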
Interesting question. I assume the main reason is that you don't have to care about translation or localization files during development as the main language is in the code itself.
Well it probably is just that it's easier to read, and so easier to translate. I'm of the opinion that your way is best for scalability, but it does just require that extra bit of effort, which some developers might not consider worth it... and for some projects, it probably isn't.
There's a fallback hierarchy, from most specific locale to the unlocalised version in the source code.
So French in France might have the following fallback route:
1. fr_FR
2. fr
3. Unlocalised: the string in the source code.
As a result, having proper English sentences in the source code ensures that if a particular translation is not provided in step (1) or (2), you will at least get a proper, understandable sentence rather than random programmer garbage like “error_file_not_found”.
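A rough sketch of that fallback walk (the CATALOGUES dict is just a stand-in for whatever your i18n library loads from disk):

CATALOGUES = {
    "fr_FR": {"Please enter your name": "Veuillez saisir votre nom"},
    "fr": {},
}

def translate(msgid, locale):
    # Try "fr_FR", then "fr", then give up and return the source string.
    for candidate in (locale, locale.split("_")[0]):
        text = CATALOGUES.get(candidate, {}).get(msgid)
        if text is not None:
            return text
    return msgid  # unlocalised: whatever the programmer wrote in the code

print(translate("Please enter your name", "fr_FR"))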
Plus, what do you do if it is a format string: “Sorry but the %s does not exist” ? Worse still: “Written %s entries to %s, total size: %d” ?
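For what it's worth, named placeholders make those format strings more translator-friendly than bare %s/%d, because the translation can reorder the values. A small illustration (the _ stub and the values are only for demonstration):

_ = lambda s: s  # stand-in for the real gettext call in this snippet

msg = _("Written %(count)d entries to %(path)s, total size: %(size)d") % {
    "count": 42,
    "path": "output.log",
    "size": 1337,
}
print(msg)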
Quite old question but one additional reason I haven't seen in the answers yet:
You could end up with more placeholders than necessary, thus more work for translators and possibly inconsistent translations. However, good editors like Poedit or Gtranslator can probably help with that.
To stick with your example:
The text "Please enter your name" could appear in a different context in a different template (that the developer is most likely not aware of and shouldn't need to be). E.g. it could be used not as an error but as a prompt like a placeholder of an input field.
If you use
_("Please enter your name");
it would be reusable, the developer can be unaware of the already existing key for an error message and would just use the same text intuitively.
However, if you used
_("error_please_enter_name");
in a previous template, developers wouldn't necessarily be aware of it and would make up a second key (most likely according to a predefined wording scheme to not end up in complete chaos), e.g.
_("prompt_please_enter_name");
which then has to be translated again.
So I think that doesn't scale very well. A pre-agreed wording scheme of suffixes/prefixes (e.g. for contexts) can never be as precise as the text itself, I think: it ends up either too verbose or too general, you don't know beforehand and it's difficult to change afterwards, and it's more work for the developer than it's worth, IMHO.
Does anybody agree/disagree?
Are there any chat filters that work depending on the context? I'm talking about the use of new technologies like Artificial Intelligence and Natural Language Processing to determine, for example, whether a word is rude or not depending on the context.
The simplest way is to just use a regex for the handful of most offensive words.
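A minimal sketch of that regex approach in Python (the word list is obviously a placeholder for your own); compiling with word boundaries avoids flagging innocent substrings:

import re

BLACKLIST = ["badword1", "badword2"]  # replace with the handful of words you care about
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, BLACKLIST)) + r")\b", re.IGNORECASE)

def is_profane(text):
    return PATTERN.search(text) is not None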
There are services offered online that will allow you to query with a word to see if it's profane. Those are good options if you want to be really sure.
Unfortunately, there's no sure bet for any of these. One man's profanity is another's common talk (see "santorum": very, very vulgar, but most aren't offended). And each group has its own foul language: "ballox" isn't so bad in America, but it's fairly bad in Britain.
You could make a clbuttic mistake. Even if you eliminate everything, I can still write a horribly offensive story using the kindest language.
A short blacklist or one of the services is the way to go, depending on the level of filtering you want.
This looks interesting: http://pottymouthfilter.com. It is unfortunately a commercial product, but at least someone is working on something along those lines.
I'm looking for a library or technique to detect the input language of blocks of text provided by users. Online lookups (like Google translate) won't work for this task as I'm writing an app which must run offline.
Thanks.
Here are two more n-gram-based gems you might want to try. They work offline.
https://github.com/echen/unsupervised-language-identification, optimized for separating English from other languages (has a live demo)
https://github.com/feedbackmine/language_detector, less specialized, will detect more languages. Some languages may need some extra training — I found it to be not precise enough for German text.
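For the curious, a very rough sketch of the character-n-gram idea these gems are built on: score the input against per-language trigram profiles built from sample text (the two tiny "corpora" below are toy placeholders; real profiles come from much larger corpora):

from collections import Counter

def trigrams(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

PROFILES = {
    "en": trigrams("the quick brown fox jumps over the lazy dog"),
    "de": trigrams("der schnelle braune fuchs springt über den faulen hund"),
}

def detect(text):
    grams = trigrams(text)
    # Pick the language whose profile shares the most trigram mass with the input.
    return max(PROFILES, key=lambda lang: sum((grams & PROFILES[lang]).values()))

print(detect("über den faulen hund"))  # -> "de" with these toy profiles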
For anyone interested, I've found http://rubygems.org/gems/kenwaln-whatlanguage, which is performing excellently.
I'm using CLD, which I really like; it's succinct and easy to use. Give it a try.
A quick demo of WhatLanguage in Ruby:
http://www.youtube.com/watch?v=lNqZ2cqOReo&list=UUJ_3fstMOH-g4yBxtvgAWkw&index=0&feature=plcp
I have a list of airport names and my users have the possibility to enter an airport name to select it for further processing.
How would you handle misspelled names and present a list of suggestions?
Look up Levenshtein distances to match a correct name against a given user input.
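A minimal sketch using only the standard library: difflib's similarity ratio is not exactly Levenshtein distance, but it gives the same kind of fuzzy ranking without extra dependencies (the AIRPORTS list is a placeholder for your own data):

import difflib

AIRPORTS = ["Heathrow", "Gatwick", "Schiphol", "Charles de Gaulle"]

def suggest(user_input, limit=3):
    # Returns the closest airport names, best match first.
    return difflib.get_close_matches(user_input, AIRPORTS, n=limit, cutoff=0.6)

print(suggest("Heatrow"))  # -> ['Heathrow']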
http://norvig.com/spell-correct.html
It does something like Levenshtein but, because he doesn't go all the way, it's more efficient.
Employ spell check in your code. The list of words should contain only correct spellings of airports.
This is not a great way to do this. You should either go for a control that provides an autocomplete option, or a drop-down, as someone else suggested.
Use AJAX if your technology supports it.
I know it's not what you asked, but if this is an application where getting the right airport is important (e.g. booking tickets) then you might want to have a confirmation stage to make sure you have the right one. There have been cases of people getting tickets for the wrong Sydney, for instance.
It may be better to let the user select from the list of airport names instead of letting them type in their own. No mistakes can be made that way.
While it won't help right away, you could keep track of typos, and see which name they finally enter when a correct name is entered. That way you can track most common typos, and offer the best options.
Adding to Kevin's suggestion, it might be the best of both worlds if you use an input box with JavaScript autocomplete, such as jQuery autocomplete.
edit: danish beat me :(
There may be an existing spell-check library you can use. The code to do this sort of thing well is non-trivial. If you do want to write this yourself, you might want to look at dictionary tries.
One method that may work is to just generate a huge list of possible error words and their corrections (here's an implementation in Python), which you could cache for greater performance.
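A sketch of that precompute-and-cache idea (Norvig-style edit-distance-1 candidates, lowercased; the airport list is again a placeholder): every string within one edit of a correct name maps back to that name, so a lookup is a single dictionary access.

def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

AIRPORTS = ["heathrow", "gatwick", "schiphol"]
CORRECTIONS = {typo: name for name in AIRPORTS for typo in edits1(name)}

print(CORRECTIONS.get("heatrow"))  # -> "heathrow"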
In an application I want a simple text input, enriched with some markup to include formatting or semantic labeling. I want the syntax to be as easy as possible, and I want to be able to include self-defined labels.
Example:
[bold]Stackoverflow[/bold] is a [tag]good[/tag] resource for programmers.
Tables would be needed too.
HTML/XML and LaTeX are powerful enough to allow this, but too complicated. Wiki syntax seems simple, but it uses a different symbol for each kind of markup, has unclear quoting, and every wiki seems to have a different syntax. For tables and similar things, wiki syntax becomes very complicated.
Does a language/syntax exist that matches my needs, or that can be slightly changed to do so? Or do I have to invent something myself? In that case, do you have suggestions?
Definitely do NOT invent your own. There are plenty of simple markup languages already, and users HATE learning new ones. Trust me on this!
I would suggest using one of the following:
Textile
Markdown
BBCode
Make your decision based on your userbase, as well as what tools and parsers are available in your chosen language. For my site, we went with Textile, but I've found that BBCode tends to be the language that most people already know. However, this will vary with different user demographics.
StackOverflow, along with several other sites, uses Markdown. I think it will give you the best balance between features and simplicity.
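For comparison, the example from the question written in Markdown and rendered with the third-party markdown package (pip install markdown); the custom [tag] semantics would need an extension or post-processing:

import markdown

source = "**Stackoverflow** is a *good* resource for programmers."
print(markdown.markdown(source))
# -> <p><strong>Stackoverflow</strong> is a <em>good</em> resource for programmers.</p>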
Let me add ReStructuredText to the list.
An additional benefit of using it is the availability of the ReStructuredText to Anything service, which makes it extremely easy to create HTML or PDF versions of the document.
As already pointed out, there are a lot of lightweight markup languages (many are listed in the Wikipedia article), so there should be no need to create your own.