Is it wrong to put all CodeIgniter translations into one translation file?

Looking for some advice...
I'm creating a multi-language site with CodeIgniter. CI allows me to create several language files, e.g. one per controller, and load language files whenever I need them.
For me, it sounds easier to just work with one language file and auto-load it, but this approach doesn't seem to be encouraged. Can anyone tell me if working with one language file (per language) is OK, or should I use a language file per controller?

It depends on the size of your file. Every time a language file is loaded, all of its data is read into memory, so a single oversized file costs you memory on every request. If your language file is big, it is better to split it into multiple files and load each one only when needed; if it is small, a single file is better because there is less to manage and it is simpler to use.
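For reference, a minimal sketch of both options in CodeIgniter (the file and key names are made up for illustration):

// application/language/english/site_lang.php
$lang['site_welcome'] = 'Welcome';

// Option 1: a single file, auto-loaded on every request
// (in application/config/autoload.php):
$autoload['language'] = array('site');

// Option 2: a file per controller, loaded on demand inside the controller:
$this->lang->load('site', 'english');    // reads site_lang.php
echo $this->lang->line('site_welcome');  // -> 'Welcome'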

Related

Single or multiple translation.json files for i18n?

I'm working on a project in Aurelia and using the aurelia-i18n plugin. So far it looks great: translation is working, and the interface language updates instantly when I change locale.
Question: is there a logical, organizational or performance advantage to using multiple translation files vs. a single translation file? For instance:
Should I just put everything into one file?
my-aurelia/locales/en/translation.json
my-aurelia/locales/es/translation.json
Or should I separate into multiple translation files?
my-aurelia/locales/en/nav.json
my-aurelia/locales/en/words.json
my-aurelia/locales/en/phrases.json
my-aurelia/locales/es/nav.json
my-aurelia/locales/es/words.json
my-aurelia/locales/es/phrases.json
Here's how I have instantiated the plugin for this example (inside the export function configure(aurelia) { ... } of my-aurelia/src/main.js), but I'm at an important design crossroads.
aurelia.use.plugin('aurelia-i18n', (instance) => {
  // register backend plugin
  instance.i18next.use(XHR);
  // adapt options to your needs (see http://i18next.com/docs/options/)
  instance.setup({
    backend: {
      loadPath: '/locales/{{lng}}/{{ns}}.json',
    },
    lng: 'es',
    ns: ['words', 'phrases', 'nav'],
    defaultNS: 'words',
    attributes: ['t', 'i18n'],
    fallbackLng: 'en',
    debug: false
  });
});
One json language file or multiple json language files? Any additional advice?
Performance-wise, a single file per language will be slightly faster on the initial load because fewer requests are necessary. However, this micro-optimization is negligible, and you should put more value on code structure and readability, especially for other people working on the code after you.
Will a single file become so large that it will be hard for people to find the right entry and change the content of the JSON file? If not, and you do not expect it to grow to such a size, you're probably best off using a single file.
Will people wonder if you put "Gracias/Thank You" in words (thanks) or phrases (thank you)? I recommend using a structure which is clear for someone who is not familiar with your code.
Lastly, one organizational structure I have not seen you mention, but which I have used myself, is to order i18n files based on your views. This makes it easy to find the file which needs to be changed: you already know which view you're working on, so you don't have to hunt for the right i18n file.
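For example (the view names here are hypothetical), a view-based layout would look like:
my-aurelia/locales/en/home.json
my-aurelia/locales/en/settings.json
my-aurelia/locales/es/home.json
my-aurelia/locales/es/settings.json
with each view registered as an i18next namespace in the setup from the question:
ns: ['home', 'settings'],
defaultNS: 'home',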

Rules for file extensions?

Are there any rules for file extensions? For example, I wrote some code which reads and writes a byte pattern that is only understood by that specific program. I'm assuming my antivirus program won't be too happy if I give it the name "pleasetrustme.exe"... Is it generally allowed to use those extensions? And what about the lesser known ones, like ".arw"?
You can use any file extension you want (or none at all). Using standard extensions that reflect the actual type of the file just makes things more convenient. On Windows, file extensions control stuff like how the files are displayed in Windows Explorer and what happens when you double click on it.
I wrote some code which reads and writes a byte pattern that is only understood by that specific program.
A file extension is only an indication of what type of data will be inside, never a guarantee that certain data formatted in a specific way will be inside the file.
For your own specific data structure it is of course always best to choose an extension that is not already in use for other file formats (or use a general extension like .dat or .bin, maybe). This also has the advantage that you can use your own icon without it being overwritten by other software using the same extension - or the other way around.
But maybe even more important when creating a custom (binary?) file format is to provide a magic number as the first bytes of that file, perhaps followed by a file header structure containing a version number etc. That way your own software can first check the header data to make sure it's the right type and version (for example, anyone could rename any file type to your extension, so your program needs a way to do some checks inside the file before reading the remaining data).
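As a sketch of that idea in Java (the magic number, version, and error messages are invented for illustration):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class HeaderCheck {
    private static final int MAGIC = 0x4D594654;   // hypothetical magic bytes, "MYFT"
    private static final int CURRENT_VERSION = 2;  // hypothetical format version

    public static void readHeader(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            if (in.readInt() != MAGIC) {
                throw new IOException("Not one of our files: bad magic number");
            }
            int version = in.readUnsignedShort();
            if (version > CURRENT_VERSION) {
                throw new IOException("Unsupported file version: " + version);
            }
            // Header checks passed; it is now safe to read the remaining data.
        }
    }
}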

Including Files in Ruby Questions

I am very new to Ruby, so could you please suggest the best practices for separating files and including them?
What is the preferred design structure of the file layout? When do you decide to separate an algorithm into a new file?
When do you use load to include other files and when do you use require?
And is there a performance hit when you include files?
Thanks.
I make one file per class, except for small helper classes that are not needed by other files. I also separate my different modules into subdirectories.
The difference between load and require is that require will only load the file once, even if it's called multiple times, while load will load it again regardless of whether it's been loaded before. You'll almost always want to use require, except maybe in irb when you want to manually reload a file.
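A quick illustration of the difference (greeter.rb is a hypothetical helper file in the same directory):

# require loads a file at most once; load re-reads it every time.
require_relative 'greeter'   # reads greeter.rb, returns true
require_relative 'greeter'   # already loaded: skipped, returns false

load './greeter.rb'          # reads the file
load './greeter.rb'          # reads it again, redefining its contents

Note that require (and require_relative) take the name without the .rb extension, while load wants the full filename.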
I'm not sure about the performance hit. When you load or require a file, the interpreter has to interpret the file. Most Ruby implementations will compile it to virtual machine code after it is required. Obviously, require is more performant when the file may already have been loaded, because it may not have to load it again.

How do you manage the String Translation Process?

I am working on a software project that needs to be translated into 30 languages. This means that changing any string incurs a relatively high cost. Additionally, translation does not happen overnight, because the translation package needs to be worked on by different translators, so this might take a while.
Adding new features is somewhat cumbersome. We can think up all the strings that will be needed before we actually code the UI, but sometimes we still need to add new strings because of bug fixes or an oversight.
So the question is, how do you manage all this process? Any tips in how to ease the impact of translation in the software project? How to rule the strings, instead of having the strings rule you?
EDIT: We are using Java and all strings are internationalized using resource bundles, so the problem is not the internationalization per se, but the management of the strings.
I'm not sure which platform you're internationalizing in. I've written an answer before on the best way to i18n an application. See What do I need to know to globalize an asp.net application?
That said - managing the translations themselves is hard. The problem is that you'll be using the same piece of text across multiple pages. Your framework may not, however, support only having that piece of text in one file (resource files in asp.net, for instance, encourage you to have one resource file per language).
The way that we found to work with things was to have a central database repository of translations. We created a small .net application to import translations from resource files into that database and to export translations from that database to resource files. There is, thus, an additional step in the build process to build the resource files.
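That tool of ours was .NET against resource files; since this question is Java with resource bundles, a rough sketch of the export half of such a tool might look like this (the JDBC URL, table, and column names are all invented):

import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class ExportTranslations {
    public static void main(String[] args) throws Exception {
        String language = args[0]; // e.g. "de"
        try (Connection con = DriverManager.getConnection("jdbc:your-db-url", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT string_key, text FROM translations WHERE language = ?")) {
            ps.setString(1, language);
            Properties props = new Properties();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    props.setProperty(rs.getString(1), rs.getString(2));
                }
            }
            // Write a standard resource bundle file for the build to pick up.
            try (FileOutputStream out = new FileOutputStream("messages_" + language + ".properties")) {
                props.store(out, "generated from the translation database - do not edit by hand");
            }
        }
    }
}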
The other issue you're going to have is passing translations to your translation vendor and back. There are a couple of ways to do this - see if your translation vendor is willing to accept XML files and return properly formatted XML files. This is, really, one of the best ways, since it allows you to automate your import and export of translation files. Another alternative, if your vendor allows it, is to create a website to allow them to edit the translations.
In the end, your answer for translations will be the same for any other process that requires repetition and manual work. Automate, automate, automate. Automate every single thing that you can. Copy and paste is not your friend in this scenario.
Pootle is a web app that lets you manage the translation process over the web.
There are a number of major issues that need to be considered when internationalizing an application.
Not all strings are created equal. Depending upon the language, the length of a sentence can change significantly. In some languages it can be half as long, and in others it can be triple the length. Make sure to design your GUI widgets with enough space to handle strings that are longer than your English strings.
Translators are typically not programmers. Do not expect the translators to be able to read and maintain the correct file formats for resource files. You should set up a mechanism to round-trip the translated data between your resource files and something like a spreadsheet. One possibility is to use XSL filters with Open Office, so that you can save to resource files directly from a spreadsheet application. Also, translators or translation service companies may already have their own databases, so it is good to ask what they use and write some tools to automate the exchange.
You will need to insert data into strings - don't pretend that you will never have to, or that you will always be able to put the data at the end. Make sure that you have a string formatter set up for replacing placeholders in strings. Furthermore, make sure to document for the translators the typical values that will be substituted. Remember, the order of the placeholders may change in different languages.
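Since the question is using Java resource bundles, java.text.MessageFormat is the usual tool for this. A small sketch (the patterns and the German translation are purely illustrative):

import java.text.MessageFormat;

public class Placeholders {
    public static void main(String[] args) {
        // English pattern: placeholders appear in argument order.
        String en = MessageFormat.format("{0} added {1} items", "Alice", 3);
        // A translation may reorder the numbered placeholders;
        // the argument list passed by the code stays identical.
        String de = MessageFormat.format("{1} Artikel wurden von {0} hinzugefügt", "Alice", 3);
        System.out.println(en);  // Alice added 3 items
        System.out.println(de);
    }
}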
Name your i18n string variables something that reflects their meaning. Do you really want to be looking up numbers in a resource file to find out what the contents of a given string are? Developers depend on being able to read the string output in code far more than they often realize.
Don't be afraid of code generation. In my current project, I have written a small Java program, called by Ant, that parses all of the keys of the default-language (master) resource file and maps each key to a constant defined in my localization class. See below. The lines between the //---- comments are auto-generated. I run the generator every time I add a string.
public final class l7d {
    ...normal junk
    /**
     * Reference to the localized strings resource bundle.
     */
    public static final ResourceBundle l7dBundle =
        ResourceBundle.getBundle(BUNDLE_PATH);
    //---- start l7d fields ----\
    public static final String ERROR_AuthenticationException;
    public static final String ERROR_cannot_find_algorithm;
    public static final String ERROR_invalid_context;
    ...many more
    //---- end l7d fields ----\
    static {
        //---- start setting l7d fields ----\
        ERROR_AuthenticationException = l7dBundle.getString("ERROR_AuthenticationException");
        ERROR_cannot_find_algorithm = l7dBundle.getString("ERROR_cannot_find_algorithm");
        ERROR_invalid_context = l7dBundle.getString("ERROR_invalid_context");
        ...many more
        //---- end setting l7d fields ----\
    }
}
The approach above offers a few benefits.
Since your string key is now defined as a field, your IDE should support code completion for it. This will save you a lot of typing. It gets really frustrating looking up every key name and fixing typos every time you want to print a string.
Someone please correct me if I am wrong: loading all of the strings into memory at static initialization (as in the example) results in a quicker load time at the cost of additional memory usage. I have found the additional amount of memory used is negligible and worth the trade-off.
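Usage then reduces to a plain field reference (using one of the keys from the generated example above):
String message = l7d.ERROR_invalid_context; // typos become compile errors, and the IDE autocompletes the key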
The localised projects I've worked on had 'string freeze' dates. After this time, the only way strings were allowed to be changed was with permission from a very senior member of the project management team.
It isn't exactly a perfect solution, but it did enable us to put defects regarding strings on hold until the next release with a valid reason. Once the string freeze has occurred, you also have a valid reason to deny adding brand new features to the project on 'spur of the moment' decisions. And having the permission come from high up meant that middle managers had no power to change specs on you :)
If available, use a database for this. Each string gets an id, and there is either a table for each language, or one table for all languages with the language in a column (depending on how the site is accessed, performance dictates which is better). This allows updates from translators without trying to manage code files and version control details. Further, it's almost trivial to run reports on what isn't translated, and to keep track of what was an automatic (engine) translation vs. a real human translation.
If no database, then I stick each language in a separate file so version control issues are reduced. But the structure is basically the same - each string has an id.
-Adam
Not only did we use a database instead of the vaunted resource files (I have never understood why people use something like that which is a pain to manage, when we have such good tools for dealing with databases), but we also avoided the need to tag things in the application (forgetting to tag controls with numbers in VB6 Forms was always a problem) by using reflection to identify the controls for translation. Then we use an XML file which translates the controls to the phrase IDs from the dictionary database.
Although the mapping file had to be managed, it could still be managed independent of the build process, and the translation of the application was actually possible by end-users who had rights in the database.
The solution we have come up with so far is a small application in Excel that reads all the property files and then shows a matrix with all the translations (languages as headers, keys as rows). It is quite evident what is missing then. This is sent to the translators. When it comes back, the sheet can be processed to generate the same property bundles again. So far it has eased the pain somewhat, but I wonder what else is around.
This Google book - resource file management - gives some good tips.
You can use Resource File Management software to keep track of strings that have changed and control the workflow to get them translated - otherwise you end up in a mess of freezes and overbearing version control
Some tools that do this sort of thing - no connection, and I haven't actually used them, just researching:
http://www.sisulizer.com/
http://www.translationzone.com/en/products/
I put in a makefile target that finds all the .properties files and puts them in a zip file to send off to the translators. I offered to send them just diffs, but for some reason they want the whole bundle of files each time. I think they have their own system for tracking just the differences, because they charge us based on how many strings have changed from one time to the next. When I get their delivery back, I manually diff all their files against the previous delivery to see if anything unexpected has changed - one time all the PT_BR (Brazilian Portuguese) strings changed, and it turned out they'd used a PT_PT (Portuguese Portuguese) translator for that batch in spite of the order for PT_BR.
In Java, internationalization is accomplished by moving the strings to resource bundles ... the translation process is still long and arduous, but at least it's separated from the process of producing the software, releasing service packs etc. One thing that helps is to have a CI system that repackages everything any time changes are made. We can have a new version tested and out in a matter of minutes whether it's a code change, new language pack or both.
For starters, I'd use default strings in case a translation is missing. For example, the English or Spanish value.
Secondly, you might want to consider a web app or something similar for your translators to use. This requires some resources upfront, but at least you won't need to send files around and it will be obvious for the translators which strings are new, etc.

What are the best practices for building multi-lingual applications on win32?

I have to build a GUI application on Windows Mobile, and would like the user to be able to choose the language she wants, or the application to choose the language automatically. I am considering using multiple DLLs containing just the required resources.
1) What is the preferred (default?) way to get the application to choose the proper resource language automatically, without user intervention? Any samples?
2) What are my options to allow user / application control what language should it display?
3) If possible, how do I create a dll that would contain multiple language resources and then dynamically choose the language?
For #1, you can use the GetSystemDefaultLangID function to get the language identifier for the machine.
For #2, you could list languages you support and when the user selects one, write the selection into a text file or registry (is there a registry on Windows Mobile?). On startup, use the function in #1 only if there is no selection in the file or registry.
For #3, the way we do it is to have one resource DLL per language, each of which contains the same resource IDs. Once you figure out the language, load the DLL for that language and the rest just works.
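A minimal sketch of that per-language-DLL pattern (the DLL names and the IDS_HELLO resource ID are hypothetical, and this is desktop Win32; Windows Mobile details may differ):

#include <windows.h>

HINSTANCE LoadLanguageDll() {
    LANGID lang = GetSystemDefaultLangID();
    const wchar_t* dll = L"lang_en.dll";  // fallback language DLL
    if (PRIMARYLANGID(lang) == LANG_SPANISH)
        dll = L"lang_es.dll";
    // Resource-only DLL: no code in it ever needs to run.
    return LoadLibraryEx(dll, NULL, LOAD_LIBRARY_AS_DATAFILE);
}

// Every string lookup then goes through the chosen module, and
// "the rest just works" because the resource IDs match across DLLs:
// wchar_t buf[256];
// LoadStringW(hLangDll, IDS_HELLO, buf, 256);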
Re 1: The previous GetSystemDefaultLangID suggestion is a good one.
Re 2: You can ask as a first step in your installation. Or you can package different installers for each language.
Re 3: In theory the DLL method mentioned above sounds great; however, in practice it didn't work very well at all for me personally.
A better method is to surround all of the strings in your program with either Localize or NoLocalize:
MessageBox(Localize("Hello"), Localize("Title"), MB_OK);
RegOpenKey(NoLocalize("\\SOFTWARE\\RegKey"), ...);
Localize is just a function that converts your English text to the selected language. NoLocalize does nothing.
You want to surround your strings with these calls because you can then build a couple of useful scripts in your scripting language of choice.
1) A script that searches for all the Localize(" prefixes and outputs a .ini file with english=otherlanguage name-value pairs. If the output .ini file already contains a mapping, you don't add it again. You never re-create the .ini file completely; the script just adds the missing entries each time you run it.
2) A script that searches all the strings and makes sure they are surrounded by either Localize(" or NoLocalize(". If not, it tells you which strings you still need to localize.
The reason #2 is important is because you need to make sure all of your strings are actually consciously marked as needing localization or not. Otherwise it is absolutely impossible to make sure you have proper localization.
The reason for #1 instead of loading from a DLL is because it takes no work to maintain this solution and you can add new strings that need to be translated on the fly.
You ship the .ini files that are output with your program. You also give these .ini files to your translators so they can fill in the english=otherlanguage pairs. When they send them back, you simply replace your checked-in .ini file with the one from your translator. Running the script from #1 will re-add any translations that went missing, if any strings were added while the translator was working.
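A minimal sketch of the Localize function described above, using the Win32 profile API to read one of the shipped .ini files (the file and section names are invented):

#include <windows.h>
#include <string>

std::wstring Localize(const wchar_t* english) {
    wchar_t buf[512];
    // Look the English text up as a key; fall back to the English
    // text itself when no translation exists yet.
    GetPrivateProfileStringW(L"strings", english, english,
                             buf, 512, L".\\lang.ini");
    return buf;
}

NoLocalize would simply return its argument unchanged.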
