I have a (binary) reader to read DBC files (it's a file format used by a game) into structures that I know. Every DBC file differs a little from the others (their structs), and their reading methods differ too (there is spell.dbc, item.dbc, map.dbc, etc...).
So I made unique reading methods for all the DBC files I want to read. (I think it's not the best solution, but for now it's okay for me.)
Here is an example usage:
DBCReader reader = new DBCReader(DBCFile.Spell); // you can use DBCFile.Map or others here
Now my question: is it possible to list only the spell-related methods when I use my reader?
My DBCReader class contains different reading methods for all the DBC files, and I would like to see only the spell-related ones.
Right now, when I type "reader." in C#, IntelliSense lists the methods for all DBC files, like ReadMapName() (which is for Map.dbc), ReadSpellID() (which is for Spell.dbc), GetItemName() (which is for Item.dbc), etc.
But I only want the spell-related methods to be listed. Is that possible? :)
Thanks.
I'm not aware of a way to get intellisense to display a subset of the methods in a class.
You could make separate classes for each type, e.g.:
DbcSpellReader, DbcMapReader, etc.
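For illustration, here is a minimal sketch of that split (shown in Java; the same structure carries over directly to C#). All class and method names are made up:

// One reader class per DBC file type, so code completion only offers
// the methods that make sense for that file. Names are hypothetical.
class DbcSpellReader {
    int readSpellId() {
        // ...read and parse spell.dbc here...
        return 0;
    }
}

class DbcMapReader {
    String readMapName() {
        // ...read and parse map.dbc here...
        return "";
    }
}

class Demo {
    public static void main(String[] args) {
        DbcSpellReader spells = new DbcSpellReader();
        // Typing "spells." now only lists spell-related methods.
        System.out.println(spells.readSpellId());
    }
}

Any shared binary-reading helpers could live in a common base class that the per-file readers extend.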
I want to make the editor experience better and more visually pleasing when filling in content on a page (in the All Properties view). It could be a simple divider or a heading.
I am already using tabs whenever it makes sense. I have also been experimenting with using blocks as properties. This adds a nice separation with a clear heading, but it is much more code to maintain and, to be honest, a bit of a mess when the properties truly belong to the page type.
Out of the box, it is not possible to decorate properties with headlines unless you use block properties, as you mention yourself.
However, I thought your question was quite interesting, and I discovered that extending Episerver to accommodate this behavior is surprisingly easy. I have written an example solution, which you're free to use as you like: https://arlc.dk/grouping-properties-with-headlines-without-property-blocks.
If you dislike that solution, an alternative approach would be to introduce your own property type (Headline) and create 1) a custom dojo widget that simply displays a headline, and 2) an EditorDescriptor to set the ClientEditingClass.
Linus wrote an excellent blog post on this here: https://world.episerver.com/blogs/Linus-Ekstrom/Dates/2012/7/Creating-a-custom-editor-for-a-property/.
EDIT:
I see, I have skipped too quickly over the overriding part.
You don't have to override any files by replacing them, and you won't have to extract Shell.zip (unless you're curious how Episerver has implemented their widgets). The part that overrides the specific component is define("epi/shell/form/Field", ...). As long as your definition of this widget is loaded after Shell, dojo will use your implementation whenever something requires "epi/shell/form/Field". The thing that ensures your implementation is loaded afterwards is in module.config, under 'This injects our field-implementation [...]'.
The path ~/ClientResources/Scripts/Shell/Field/Field.js is simply the location I have chosen for the overridden version of Field.js. You can put it wherever you like, as long as you update module.config accordingly with the new path.
It works like this: first, Episerver defines widget A. Then you define a widget with the same name, A. When anything tries to fetch A, dojo returns your implementation rather than Episerver's.
I am trying to create a new GUI element within DrRacket's text window, like picts or syntax objects. As far as I can tell, the most standard way of doing this is with a snip%.1
Unfortunately, the documentation for creating new snips, while comprehensive, is a bit impenetrable and leaves some questions to be answered.
For starters, what is the difference between a snip% and a snip-class%? Why do these need to be separated out into two classes, rather than simply being combined into one class? Is it because multiple snips will use one snip class?
Second, what is snip-reader<%>? Why does it need to be a separate class, and why is the module providing it supposed to be installed?2 If it does need to be a separate class, why can't it just be referred to directly? Why go through this whole process of constructing and then parsing a string of the form "(lib ...)\n(lib ...)"?
There might not be any reason for this design; it might just be a remnant of an old API. If so, has anyone thought of making a new, more consistent API? Or if there is a reason for this design, can you please tell me what it is, as the docs don't seem to make it clear.
As of right now, I can copy/paste the sample given in the docs on creating a new snip, but I'm having a hard time understanding the design well enough to use snips properly.
1I know there are other ways to do it, but I also want to have interactive buttons and whatnot.
2I know it doesn't need to be installed as a library per se, but the documentation seems to strongly push you in that direction.
Okay, I think I finally found the answer. Broadly speaking:
The snip% class includes the methods for drawing the snip, telling the editor how much space to reserve for the picture, and handling events such as mouse clicks.
Next, the snip-class% class is used for encoding and decoding snips. This must be a separate class because, when saving to a file, the editor needs to encode what type of snip it is, and for obvious reasons it can't just put the literal snip% class in there. The value it stores in the file is the snip-class%'s 'class name'. This can be anything, and as long as the editor has the class name associated with a snip-class%, it can be loaded. Additionally, if it is of the form "(lib ...)" or "(lib ...) (lib ...)", Racket will just automatically load it into the list for you.
Nothing 'needs' to be installed per se; it's just the easiest way to go about it. Otherwise you need to manually tell the editor how to handle the snip before actually loading the file.
Is there a data structure within LiveCode that can be used as a "holder" for associated data, letting me handle it collectively? I come from a Java / Javascript / C background so I am looking for a Class or Struct sort of data structure.
I've found examples of Groups, which seem to have some of this functionality, but it feels a bit like I'm bending the language to meet my needs.
As a specific example, suppose I had an image field on my screen that would randomly display an image and, when pressed, play an associated sound clip. I'd expect to create a list of "structures" that contained the path to the image and the path to the associated sound clip, and use that data to populate the image field and to decide what sound clip to play.
Would a Group be the correct structure to use in this case? Or am I approaching this in a way that isn't really fitting with the way LiveCode works?
It takes a little getting used to, but the xTalk world is much simpler and more open than any ordinary procedural language. So much of what you once had to manage is no longer required.
So when splash21 said that you could store all your image and sound references in a custom property, he was really saying that the LiveCode environment contains intrinsic, high level functionality that makes these sorts of things instantly accessible, and the only thing required of you is to call for them, and they simply work.
The only way to appreciate this is to make a few simple programs, to really see what is possible. Make your application. Everything you mentioned can be accomplished with perhaps a dozen lines of code in a single handler. I recommend that you join the LiveCode use-list and forums. The community is vibrant and eager to help, frequently with full-blown solutions to specific problems, but more importantly as guides and mentors to new users.
Craig Newman
Arrays in LiveCode are actually associative arrays (like hash maps). A key is associated with a value, and the value may itself be an array.
Chapter 5.5.7 of the User's Guide says
Array elements may contain nested or sub-elements, making them multi-dimensional. This type of array is ideal for processing hierarchical data structures such as trees or XML. To access a sub-element, simply declare it using an additional set of square brackets.
put "ABC" into myVariable["myKeyName"]["aChildElement"]
see also
How to store pictures in a stack?
Dave - I'm hoping to get a struct-like container implemented in the near future. Meanwhile you can, as splash21 mentioned, use custom properties (or better yet, custom property sets) to do what you want. This will give you a pseudo-struct for each object, and you can implement the file and sound specifications as properties. And if you use that in conjunction with a behavior object, you'll end up very close to a real inheritable class formation.
I'm starting to modify my app, which uses all hardcoded strings for errors, GUI, etc. I'm considering these two approaches, but let me know if there is an even better way:
- Put all strings in resource (.rc) files.
- Define all strings in a file, once for each language, and use a preprocessor define to decide which strings get compiled in.
Which of these two approaches is generally preferred?
Put all the strings in resource files. Once you've done that, there are several good translation packages available. One useful thing these packages do is allow you to get translation done by somebody who doesn't program.
Remember, also, that internationalization (i18n) is a large subject, and there's a lot of things to consider. It isn't just a matter of translating strings. Do a web search on it, at the very least. You might want to read a book on it: I used International Programming for Windows by Schmitt as a guide. It's an old book from Microsoft Press, and I had to get it through a used book service; most of the more modern stuff seems to be on internationalizing .NET apps.
Without knowing more about your project (what sort of software, who the intended audience is, what sort of organization you have, what sort of budget, why you're interested in internationalization, etc.), this is about the most I can tell you.
Generally you see locale-specific resource files containing strings referenced by key. Compiling different versions for different locales is a very rigid solution and will be a maintenance nightmare. Using resource files also allows the user to have fallback locales.
There's another approach of just putting strings in the source with something like tr(" ") and using one of the tools that strips them out and converts them.
It works with any toolkit/GUI library.
You can mark text to be converted and text not to change (such as protocol strings or db keys).
It makes the source easier to read and search, instead of having to look up what IDS_MESSAGE34 means.
One problem with resource files, at least with Windows/MFC, is that you can't use the string table in dialogs. So you have some text in the string table and some in the dialog section, which you have to deal with separately.
I am working on a software project that needs to be translated into 30 languages. This means that changing any string incurs a relatively high cost. Additionally, translation does not happen overnight, because the translation package needs to be worked on by different translators, so this might take a while.
Adding new features is somewhat cumbersome. We can think up all the strings that will be needed before we actually code the UI, but sometimes we still need to add new strings because of bug fixes or oversights.
So the question is: how do you manage this whole process? Any tips on how to ease the impact of translation on the software project? How do you rule the strings, instead of having the strings rule you?
EDIT: We are using Java, and all strings are internationalized using resource bundles, so the problem is not the internationalization per se, but the management of the strings.
I'm not sure which platform you're internationalizing in. I've written an answer before on the best way to i18n an application; see What do I need to know to globalize an asp.net application?
That said - managing the translations themselves is hard. The problem is that you'll be using the same piece of text across multiple pages. Your framework may not, however, support only having that piece of text in one file (resource files in asp.net, for instance, encourage you to have one resource file per language).
The way that we found to work with things was to have a central database repository of translations. We created a small .net application to import translations from resource files into that database and to export translations from that database to resource files. There is, thus, an additional step in the build process to build the resource files.
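As an illustration of that import step (not their actual tool, which was written in .NET), a rough Java sketch might look like this; the table and column names are made up:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

public class PropertiesToDb {
    // Hypothetical schema: translations(bundle, locale, msg_key, msg_value)
    static void importFile(Connection conn, String bundle, String locale, File propsFile) throws Exception {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(propsFile)) {
            props.load(in);
        }
        String sql = "INSERT INTO translations (bundle, locale, msg_key, msg_value) VALUES (?, ?, ?, ?)";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (String key : props.stringPropertyNames()) {
                stmt.setString(1, bundle);
                stmt.setString(2, locale);
                stmt.setString(3, key);
                stmt.setString(4, props.getProperty(key));
                stmt.addBatch();
            }
            stmt.executeBatch();
        }
    }

    public static void main(String[] args) throws Exception {
        // The JDBC URL is a placeholder; any database holding the table above will do.
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://localhost/translations")) {
            importFile(conn, "messages", "de", new File("messages_de.properties"));
        }
    }
}

The export direction is the mirror image: query the table and write the rows back out with Properties.store().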
The other issue you're going to have is passing translations to your translation vendor and back. There are a couple of ways to do this - see if your translation vendor is willing to accept XML files and return properly formatted XML files. This is really one of the best ways, since it allows you to automate your import and export of translation files. Another alternative, if your vendor allows it, is to create a website to allow them to edit the translations.
In the end, your answer for translations will be the same for any other process that requires repetition and manual work. Automate, automate, automate. Automate every single thing that you can. Copy and paste is not your friend in this scenario.
Pootle is a web app that lets you manage the translation process over the web.
There are a number of major issues that need to be considered when internationalizing an application.
Not all strings are created equal. Depending upon the language, the length of a sentence can change significantly. In some languages it can be half as long, and in others it can be triple the length. Make sure to design your GUI widgets with enough space to handle strings that are longer than your English strings.
Translators are typically not programmers. Do not expect the translators to be able to read and maintain the correct file formats for resource files. You should set up a mechanism to round-trip the translated data between something like a spreadsheet and your resource files. One possibility is to use XSL filters with OpenOffice, so that you can save to resource files directly from a spreadsheet application. Also, translators or translation service companies may already have their own databases, so it is good to ask what they use and write some tools to automate the exchange.
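For example, a minimal sketch of the export half of such a round trip, turning a two-column spreadsheet export (key and translated text, semicolon-separated) into a .properties file; the file names and column layout are assumptions:

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class CsvToProperties {
    public static void main(String[] args) throws IOException {
        // Assumed input: one "key;translation" pair per line, no embedded separators.
        Properties out = new Properties();
        for (String line : Files.readAllLines(Paths.get("translations_de.csv"), StandardCharsets.UTF_8)) {
            String[] parts = line.split(";", 2);
            if (parts.length == 2) {
                out.setProperty(parts[0].trim(), parts[1].trim());
            }
        }
        try (Writer writer = Files.newBufferedWriter(Paths.get("messages_de.properties"))) {
            out.store(writer, "Generated from spreadsheet export");
        }
    }
}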
You will need to insert data into strings - don't pretend that you will never have to, or that you will always be able to put the value at the end. Make sure that you have a string formatter set up for replacing placeholders in strings. Furthermore, make sure to document for the translators what typical values will be substituted. Remember, the order of the placeholders may change in different languages.
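In Java this is what MessageFormat is for; a small illustration of a translation legitimately reordering the placeholders (both patterns are invented):

import java.text.MessageFormat;

public class PlaceholderDemo {
    public static void main(String[] args) {
        // English pattern: the user name comes first, then the file count.
        String en = "User {0} has {1} files";
        // A translation may reverse the order of the placeholders.
        String other = "{1} files belong to user {0}";

        System.out.println(MessageFormat.format(en, "alice", 3));
        System.out.println(MessageFormat.format(other, "alice", 3));
    }
}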
Name your i18n string variables something that reflects their meaning. Do you really want to be looking up numbers in a resource file to find out what the contents of a given string are? Developers depend, for efficiency, on being able to read the string output in code a lot more than they often realize.
Don't be afraid of code generation. In my current project, I have written a small Java program, called by ant, that parses all of the keys of the default language (master) resource file and then maps each key to a constant defined in my localization class. See below. The lines in between the //---- comments are auto-generated. I run the generator every time I add a string. (A rough sketch of such a generator is included after the list of benefits below.)
public final class l7d {
    ...normal junk
    /**
     * Reference to the localized strings resource bundle.
     */
    public static final ResourceBundle l7dBundle =
            ResourceBundle.getBundle(BUNDLE_PATH);

    //---- start l7d fields ----\
    public static final String ERROR_AuthenticationException;
    public static final String ERROR_cannot_find_algorithm;
    public static final String ERROR_invalid_context;
    ...many more
    //---- end l7d fields ----\

    static {
        //---- start setting l7d fields ----\
        ERROR_AuthenticationException = l7dBundle.getString("ERROR_AuthenticationException");
        ERROR_cannot_find_algorithm = l7dBundle.getString("ERROR_cannot_find_algorithm");
        ERROR_invalid_context = l7dBundle.getString("ERROR_invalid_context");
        ...many more
        //---- end setting l7d fields ----\
    }
}
The approach above offers a few benefits.
Since your string key is now defined as a field, your IDE should support code completion for it. This will save you a lot of typing. It gets really frustrating looking up every key name and fixing typos every time you want to print a string.
Someone please correct me if I am wrong, but loading all of the strings into memory at static initialization (as in the example) should give quicker string access at the cost of additional memory usage. I have found the additional amount of memory used to be negligible and worth the trade-off.
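For reference, here is a rough sketch of what a generator along those lines could look like. It is not the author's actual tool; the bundle path is made up, and keys are assumed to already be valid Java identifiers:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class L7dGenerator {
    public static void main(String[] args) throws IOException {
        // Hypothetical location of the master (default language) bundle.
        Properties master = new Properties();
        try (InputStream in = new FileInputStream("src/main/resources/messages.properties")) {
            master.load(in);
        }
        StringBuilder fields = new StringBuilder();
        StringBuilder init = new StringBuilder();
        for (String key : master.stringPropertyNames()) {
            fields.append("    public static final String ").append(key).append(";\n");
            init.append("        ").append(key)
                .append(" = l7dBundle.getString(\"").append(key).append("\");\n");
        }
        // A real generator would splice these two sections between the
        // //---- start/end markers of the existing class; here we just print them.
        System.out.print(fields);
        System.out.print(init);
    }
}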
The localised projects I've worked on had 'string freeze' dates. After this time, the only way strings were allowed to be changed was with permission from a very senior member of the project management team.
It isn't exactly a perfect solution, but it did enable us to put defects regarding strings on hold until the next release, with a valid reason. Once the string freeze has occurred, you also have a valid reason to deny adding brand-new features to the project on 'spur of the moment' decisions. And having the permission come from high up meant that middle managers had no power to change specs on you :)
If available, use a database for this. Each string gets an id, and there is either a table for each language, or one table for all languages with the language in a column (depending on how the site is accessed, performance dictates which is better). This allows updates from translators without trying to manage code files and version control details. Further, it's almost trivial to run reports on what isn't translated, and to keep track of what was an autotranslation (engine) vs. a real human translation.
If no database, then I stick each language in a separate file so version control issues are reduced. But the structure is basically the same - each string has an id.
-Adam
Not only did we use a database instead of the vaunted resource files (I have never understood why people use something that is such a pain to manage when we have such good tools for dealing with databases), but we also avoided the need to tag things in the application (forgetting to tag controls with numbers in VB6 forms was always a problem) by using reflection to identify the controls for translation. Then we used an XML file which maps the controls to the phrase IDs from the dictionary database.
Although the mapping file had to be managed, it could still be managed independent of the build process, and the translation of the application was actually possible by end-users who had rights in the database.
The solution we have come up with so far is a small application in Excel that reads all the property files and then shows a matrix with all the translations (languages as headers, keys as rows). It is then quite evident what is missing. This is sent to the translators. When it comes back, the sheet can be processed to generate the same property bundles again. So far it has eased the pain somewhat, but I wonder what else is around.
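A rough sketch of the same matrix-building idea in plain Java (the file names and languages are made up): collect the union of keys across all bundles and flag which languages are missing each one.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;

public class TranslationMatrix {
    public static void main(String[] args) throws IOException {
        // Hypothetical bundle files, one per language.
        Map<String, Properties> bundles = new LinkedHashMap<>();
        for (String lang : Arrays.asList("en", "de", "fr")) {
            Properties p = new Properties();
            try (InputStream in = new FileInputStream("messages_" + lang + ".properties")) {
                p.load(in);
            }
            bundles.put(lang, p);
        }
        // Union of all keys across every language.
        Set<String> allKeys = new TreeSet<>();
        bundles.values().forEach(p -> allKeys.addAll(p.stringPropertyNames()));
        // One row per key, flagging missing translations per language.
        for (String key : allKeys) {
            StringBuilder row = new StringBuilder(key);
            for (Map.Entry<String, Properties> e : bundles.entrySet()) {
                String value = e.getValue().getProperty(key);
                row.append('\t').append(value != null ? value : "<missing " + e.getKey() + ">");
            }
            System.out.println(row);
        }
    }
}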
This Google book on resource file management gives some good tips.
You can use resource file management software to keep track of strings that have changed and control the workflow to get them translated - otherwise you end up in a mess of freezes and overbearing version control.
Some tools that do this sort of thing (no connection, and I haven't actually used them; just researching):
http://www.sisulizer.com/
http://www.translationzone.com/en/products/
I put in a makefile target that finds all the .properties files and puts them in a zip file to send off to the translators. I offered to send them just diffs, but for some reason they want the whole bundle of files each time. I think they have their own system for tracking just the differences, because they charge us based on how many strings have changed from one time to the next. When I get their delivery back, I manually diff all their files against the previous delivery to see if anything unexpected has changed - one time all the PT_BR (Brazilian Portuguese) strings changed, and it turned out they'd used a PT_PT (European Portuguese) translator for that batch in spite of the order for PT_BR.
In Java, internationalization is accomplished by moving the strings to resource bundles ... the translation process is still long and arduous, but at least it's separated from the process of producing the software, releasing service packs etc. One thing that helps is to have a CI system that repackages everything any time changes are made. We can have a new version tested and out in a matter of minutes whether it's a code change, new language pack or both.
For starters, I'd use default strings in case a translation is missing. For example, the English or Spanish value.
Secondly, you might want to consider a web app or something similar for your translators to use. This requires some resources upfront, but at least you won't need to send files around and it will be obvious for the translators which strings are new, etc.
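On the fallback point: Java's ResourceBundle already gives you that behavior when the default values live in the base bundle, since keys missing from a locale-specific file are resolved from the parent bundle. A minimal illustration (the bundle name and key are made up, and messages.properties plus messages_fr.properties are assumed to be on the classpath):

import java.util.Locale;
import java.util.ResourceBundle;

public class FallbackDemo {
    public static void main(String[] args) {
        // Keys missing from messages_fr.properties fall back to the values
        // defined in the base messages.properties (e.g. the English text).
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.FRENCH);
        System.out.println(bundle.getString("error.missing_file"));
    }
}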