I have an iPhone app on the App Store, localized in English and in French, and there's something I've been wondering about for a while without being able to find an answer.
As you can see in the image below (from Xcode), English is set as the Development Language but not as the Base one, so I'm wondering what happens for a user in Spain (with a phone in Spanish) or in Germany (in German), etc. What language do they see on the App Store?
Maybe I'm worrying about nothing! But wouldn't English as the Base rather than the Development Language be more logical? Unfortunately, I can't test it myself by switching my phone to Spain/Spanish, because I still get the French App Store.
Thanks!
The localization setting has nothing to do with the App Store. It's related to the end user's device (or app-specific) region. You can add your localized files for any localization you need (by selecting the file and adding it to the desired localization from the right panel), but the default localization will always be English.
So don't worry.
I'm looking for the best approach to implement voice commands in a Xamarin app.
Here are my requirements:
I don't need to launch my app by voice. Instead, my users will launch the app through touch (so, when the app is not running, no voice recognition is needed by my app)
My app is a client/server app and will always be online (the backend will run on Azure)
My app will be used primarily in the car (so consider environmental noise)
My app will work in many languages, such as Italian, Spanish, French and English
My app should be developed with Xamarin (and possibly MvvmCross or similar)
In my app there will be two kinds of voice commands:
to select an item from a short list: the app will show a list of items, such as "apple, kiwi, banana and strawberry", and the user will have to say one of those words.
to change the current view. Typically these voice commands will be something like "cancel", "confirm", "more" and so on
The typical interaction between user, app and server should be this:
the user says one of the available commands in the current view/activity/page
suppose here that the user knows exactly which commands he/she can use; it doesn't matter for now how he/she learned these commands (he/she just knows them)
the user may prefix the command with some special words, such as "hey 'appname'", to give a command like "hey 'appname', confirm"
Note: the only purpose of the "hey 'appname'" part of the voice command is to let the app know when a command starts. The app can always be in listening mode, but it has to avoid streaming audio continuously to the server to recognize commands
in the best case the app would recognize these commands locally, without involving the remote server, since the voice commands are predefined and well known in each view. Alternatively, the app can send the audio to the server, which will return a string (in this example the returned text will be "confirm", since the audio was "hey 'appname', confirm")
the app will map the recognized text to the available commands and invoke the right one (see the sketch after this list)
the user will receive feedback from the app. The feedback could be:
voice feedback (text-to-speech)
visual feedback (something on the screen)
both of the above
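For the mapping step, here is a minimal sketch in C# of what I have in mind (the class name and the console output are only illustrative; in an MvvmCross app each Action would typically call an ICommand on the current view model):

    using System;
    using System.Collections.Generic;

    public class CommandDispatcher
    {
        // Per-view command table: recognized text -> action to invoke.
        private readonly Dictionary<string, Action> _commands =
            new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
            {
                ["confirm"] = () => Console.WriteLine("confirm -> save and close"),
                ["cancel"]  = () => Console.WriteLine("cancel -> close without saving"),
                ["more"]    = () => Console.WriteLine("more -> show next page")
            };

        // recognizedText is whatever the recognizer (local or remote) returned, e.g. "confirm".
        public void Dispatch(string recognizedText)
        {
            if (_commands.TryGetValue(recognizedText.Trim(), out var action))
                action();                                    // invoke the matching command
            else
                Console.WriteLine("Command not recognized"); // voice and/or visual feedback here
        }
    }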
I was looking at azure-cognitive-services, but in this case, as far as I've understood, there is no way to recognize the start of the command locally (everything works server-side through REST APIs or clients). So the user would have to press a button before every voice command, and I need to avoid this kind of interaction.
While the app is running, my user has his/her hands on the steering wheel and can't touch the display every time, right?
Moreover, I was looking at cortana-skills-kit and botframework, but:
It seems that Cortana Skills are available in English only
Actually, I don't need to involve Cortana to launch my app
I don't have experience with these topics, so I hope my question is clear and, generally speaking, useful for other newbie users as well.
* UPDATE 1 *
Speech recognition with a Voice Command Definition (VCD) file is really close to what I need, because:
it has a way to activate commands through a command-name shortcut
it works in the foreground (and in the background as well, even though in my case I don't need the background)
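For reference, a minimal sketch of the Windows (UWP) side, assuming a VCD file packaged as MyAppCommands.xml (the file name and the App class layout are placeholders):

    using System;
    using System.Threading.Tasks;
    using Windows.ApplicationModel.Activation;
    using Windows.ApplicationModel.VoiceCommands;
    using Windows.Storage;

    sealed partial class App : Windows.UI.Xaml.Application
    {
        // Call once at startup to register the packaged VCD file.
        private static async Task RegisterVoiceCommandsAsync()
        {
            StorageFile vcdFile = await StorageFile.GetFileFromApplicationUriAsync(
                new Uri("ms-appx:///MyAppCommands.xml"));
            await VoiceCommandDefinitionManager
                .InstallCommandDefinitionsFromStorageFileAsync(vcdFile);
        }

        // A foreground voice command arrives as a VoiceCommand activation.
        protected override void OnActivated(IActivatedEventArgs args)
        {
            if (args.Kind == ActivationKind.VoiceCommand)
            {
                var voiceArgs = (VoiceCommandActivatedEventArgs)args;
                // RulePath[0] holds the Command Name from the VCD file, e.g. "confirm".
                string commandName = voiceArgs.Result.RulePath[0];
                // ...dispatch commandName to the current view here
            }
        }
    }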
Unfortunately, this works only on Windows, since it uses the local platform API. Maybe the right approach could be based on the following considerations:
Every platform exposes a local speech recognition API (Cortana, Siri, Google Now)
Xamarin exposes the Siri and Google Now APIs and makes them available through C#
It would be useful to create a facade component that exposes the three different local speech APIs through a common interface (see the sketch below)
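Something like this, as a sketch (the interface is hypothetical, not an existing Xamarin API; each platform project would implement it on top of its local recognizer and register it via dependency injection):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Hypothetical cross-platform facade over the local speech APIs
    // (SFSpeechRecognizer on iOS, SpeechRecognizer on Android,
    // Windows.Media.SpeechRecognition on Windows).
    public interface ISpeechRecognizer
    {
        // Raised when one of the expected phrases has been recognized.
        event EventHandler<string> PhraseRecognized;

        // Start listening for the commands of the current view, in the given
        // language (e.g. "it-IT", "es-ES", "fr-FR", "en-US").
        Task StartListeningAsync(IEnumerable<string> expectedPhrases, string language);

        Task StopListeningAsync();
    }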
I'm wondering if there is some other solution to this. Cortana, as a personal assistant, is available on Windows, iOS and Android. Since Cortana works both with the local API and with a remote service (Cortana Skills), is Cortana the right approach? Does Cortana support many languages (or, at least, is such support on a roadmap)?
So, just some thoughts here. If you have other ideas or suggestions, please add them here. Thanks
I've started a Store listing experiment for the en-IN English (India) language, but I can see this experiment on Google Play even though my device uses the en-GB English (United Kingdom) locale.
The default Store listing language for my app is en-US English (United States), and en-GB is not listed.
Shouldn't the experiment be available only to users with the en-IN language, with other English locales falling back to the default (en-US)? Could this be a Store listing experiment distribution bug?
I've got a response from Google Play Developer Support. It is a Store listing bug and they are working on a fix.
My Lenovo laptop has two taskbar-type programs that show the network status and battery status. I have been trying to find out what these types of widgets are called. Unfortunately, my google-fu is only returning results for minimizing programs to the system tray.
I am not even sure if these are system tray apps or taskbar apps, but either way, please help me find an API reference or, even better, a tutorial.
I want to make a work week widget that displays the current work week number. I program mostly in Python, but I'm willing to learn another language just to make this tool.
They are known as Desktop Bands, also known as DeskBands. Note that Desktop Bands are not recommended starting in Windows 7. Note also that since they are shell extensions, they must be written in native code.
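As for the week number itself, the calculation is trivial in most languages; for example, in C#, assuming your work week numbering follows ISO-8601 (only the DeskBand hosting it has to be native code):

    using System;
    using System.Globalization;

    class WorkWeek
    {
        static void Main()
        {
            // ISO-8601 week number: weeks start on Monday,
            // week 1 is the week containing the first Thursday of the year.
            int week = ISOWeek.GetWeekOfYear(DateTime.Today); // .NET Core 3.0+
            Console.WriteLine($"WW{week:D2}");                // e.g. "WW07"
        }
    }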
I want to deliver different iTunes buy links to a user depending on what their Region Format is set to (in General settings).
Is there a way to detect that and then deliver a separate database to each user's region?
Thanks for the help.
GS.
Assuming this is a web-based program, not Cocoa, the HTTP Accept-Language header appears to be all you've got to work with. Mobile Safari provides no information about Region Format, only Language.
This method, as well as a Region Format-based one, is flawed; more important is which iTunes Store the user has selected (the iTunes setting is transferred to the iPhone). This setting is independent of geographical location, Region Format and Language. You give the user a link to the wrong store, and it might well not work.
It shouldn't matter for free content, but I've had this problem myself with non-free content, as I often switch between the UK, US and German iTunes Stores (I prefer UK/US for podcasts, but can only buy from the German one).
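If you do go the Accept-Language route anyway, here is a minimal sketch (the region-to-store mapping and the URLs are placeholders to fill in yourself):

    using System;
    using System.Collections.Generic;

    static class StoreLinks
    {
        // Placeholder links; add one entry per storefront you want to support.
        private static readonly Dictionary<string, string> LinksByRegion =
            new Dictionary<string, string>
            {
                ["GB"] = "https://itunes.apple.com/gb/",
                ["US"] = "https://itunes.apple.com/us/",
                ["DE"] = "https://itunes.apple.com/de/"
            };

        // The header looks like "en-GB,en;q=0.8": take the region of the first entry.
        public static string ForAcceptLanguage(string header)
        {
            string first = (header ?? "").Split(',')[0].Split(';')[0].Trim(); // e.g. "en-GB"
            string[] parts = first.Split('-');
            string region = parts.Length > 1 ? parts[1].ToUpperInvariant() : "US";
            return LinksByRegion.TryGetValue(region, out var link)
                ? link
                : LinksByRegion["US"]; // fall back to a default store
        }
    }

But as noted above, nothing in the request tells you which store the user's account actually belongs to, so treat the result as a guess.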