Settings API in WebOS

I was wondering if there is a Mojo or Enyo equivalent to the Android Settings.System API, where an application can change settings such as ringtone volume, time and date format, Wi-Fi on and off, etc.?

Unfortunately, this doesn't seem to be officially documented anywhere, which means either that no one has had time to document it or that the API is still in flux. Your best bet is to guess and check. The getPreferenceValues call should be helpful once you know which keys you're interested in.

Related

How is it possible to get tracked features from Tango APIs used for motion tracking?

As shown in the Project Tango GTC video, some local features are extracted and tracked for motion estimation, which is then fused with accelerometer data.
Since any developer may need to track features to develop his/her apps, I was wondering whether there is a way to get those features through the APIs.
Although it is possible to extract some points and retrieve their flow using the estimated 6DOF pose returned by the APIs, this adds extra overhead. Another issue with this approach is that the pure visual flow (including outliers) is not obtainable, since it is influenced by the IMU data.
So my question is: if these features are tracked using hardware-accelerated algorithms, how can we get them through the APIs without having to implement the tracking ourselves and do redundant work?
Any answers and suggestions would be appreciated.
It is straightforward to compile OpenCV for the Tango with NVIDIA's TADP package; use version 3.0r4. You may need to merge in some OpenCV4Android bits, but it's easy. The ES examples will fail on the device, but don't sweat it.
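Since the internally tracked features aren't exposed, the practical workaround is to run your own tracker on the camera frames once OpenCV is available. Below is a minimal sketch of that idea in Python, purely to illustrate the approach (on the device itself you would do the same thing in C++ or Java against OpenCV4Android); the video-file frame source and all parameter values are placeholders, not anything the Tango APIs provide.

```python
# Minimal sketch: track your own features with OpenCV, since the Tango APIs
# do not expose the internally tracked ones. A video file stands in for the
# camera frames here purely for illustration.
import cv2

cap = cv2.VideoCapture("camera_dump.mp4")   # hypothetical frame source
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect an initial set of corners to track.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade optical flow: this is the pure visual flow,
    # outliers and all, independent of the IMU-fused pose estimate.
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    mask = status.flatten() == 1
    good_new = new_points[mask]
    good_old = points[mask]

    # Draw the per-feature motion vectors.
    for (x0, y0), (x1, y1) in zip(good_old.reshape(-1, 2), good_new.reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 1)

    cv2.imshow("tracked features", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

    prev_gray = gray
    points = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```

Running Lucas-Kanade on your own corners gives you the raw, IMU-independent flow (outliers included) that the fused pose estimate cannot.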
Google released the "Project Tango ADF Inspector" on the Play Store. I haven't actually had time to play with it, but it's the first thing to offer any look inside that data. I think Google considers this data sensitive and is cautious in this area, with good reason; if you look for the starred "important" note on this page, you should get a feel for the sensitivity of that issue.

Gesture Recognition in Google Glass

I need to know whether a certain feature in Google Glass is available or not. I have been told that Google Glass contains a feature called "gesture detection using the camera", in other words, a system where it responds to hand commands and signals. I believe this may be true because I have seen some half-baked articles full of uncertainty. If this is true and something like that exists for real, where can I get more information?
Not possible with the current Glassware APIs, but it's reasonable to guess this would be feasible in the not-yet-released native SDK. In the meantime, it's just speculation.

Media players that can be programmatically controlled? (Ruby)

I already know that iTunes has an interface that I can control, but the API is a bit opaque and I can't find it documented anywhere. Does anyone know of any good open-source or at least well-working media players that can be programmatically controlled?
In particular, I would like to be able to search a media library for a song by title or artist, and play, pause, resume, stop the song.
Ruby would be nice, because I'm working in it, but C would work too. I could write a wrapper.
Edit: My solution has to work on Windows, as that is the environment I am developing in.
XMMS works on a client/server basis, which means it is relatively easy to control playback and the song queue. I'm not sure how easy it is to handle file metadata (song info), but maybe that part can be handled independently.
Check this guide for an overview of the functions you can use.
Back in the day, I used MPD.
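MPD (Music Player Daemon) may actually be a good fit here: it runs as a daemon (there are Windows builds as well) and is controlled over a plain text protocol on TCP port 6600, so it can be driven from Ruby with nothing more than TCPSocket. As a rough illustration of the protocol, here is a minimal Python sketch; it assumes an MPD server on localhost:6600, and the song title is a placeholder.

```python
# Minimal sketch of driving MPD (Music Player Daemon) over its text protocol.
# Assumes an MPD server is listening on localhost:6600; the same commands
# could be sent from Ruby with TCPSocket just as easily.
import socket

class MPDClient:
    def __init__(self, host="localhost", port=6600):
        self.sock = socket.create_connection((host, port))
        self.file = self.sock.makefile("rwb")
        self.file.readline()  # greeting line, e.g. "OK MPD 0.23.0"

    def command(self, cmd):
        """Send one command and collect response lines up to the OK/ACK terminator."""
        self.file.write((cmd + "\n").encode("utf-8"))
        self.file.flush()
        lines = []
        while True:
            line = self.file.readline().decode("utf-8").rstrip("\n")
            if line == "OK" or line.startswith("ACK"):
                break
            lines.append(line)
        return lines

if __name__ == "__main__":
    mpd = MPDClient()
    # Search the library by title, queue the first hit, and control playback.
    results = mpd.command('search title "some song"')
    files = [l.split(": ", 1)[1] for l in results if l.startswith("file: ")]
    if files:
        mpd.command("clear")
        mpd.command('add "%s"' % files[0])
        mpd.command("play")
        mpd.command("pause 1")   # pause
        mpd.command("pause 0")   # resume
        mpd.command("stop")
```

The same commands map one-to-one onto Ruby, and there are Ruby gems that wrap this protocol if you'd rather not talk to the socket directly.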

Extracting information from a Mac OS X application

I have a simple problem; I will be straightforward.
Suppose I have a third-party Cocoa application running that has a chat box inside. I need to capture the text inside that chat box in real time from another application and write a log file in real time with that information.
I am sure there is a way, I just don't know where to start. I have experience with Cocoa and Objective-C, and I have some apps in the iPhone App Store.
Thank you very much.
Unless the app is suitably scriptable (e.g. via AppleScript) or has some kind of external API, you're not going to be able to do this.
In short: Contact the developer of the application, but don't get your hopes up.
Unfortunately, in this day and age of protected memory and whatnot, we more or less have to be content with what the applications give us to work with.
However, you are not entirely without recourse. Using F-Script, you might be able to attach to the process and cause some controller or other to emit notifications that you can capture and log.
Edit: If, as appears to be the case, it's a Carbon application, you are well and truly hosed:
F-Script and similar tools are unlikely to work.
Even if they do, trying injection on a Carbon app, that is to say a C++ app, is likely to be an exercise in futility and disappointment, if not completely impossible.
Seeing as how Carbon is deprecated (and how!), the application is unlikely to be updated with a proper API for that sort of thing.
All of the above.
Re-edit: One tiny caveat: it is possible, although unlikely, that you could achieve something using Interface Scripting, but again, I wouldn't get my hopes up.

How to implement a voice changer?

I want to write an app that changes the microphone input voice to make it sound like a robot or some funny character. It must support sending the changed voice to other applications, such as IM software or game clients. Which technology should I pick: the Windows waveform API, DirectX, or an audio driver?
Thank you very much!
There's an MSDN Coding4Fun article that explains how to create a voice changer that operates over Skype, in C# (.NET). The full source code is also hosted as a project on CodePlex. In addition, it should be fairly easy to do something else with the audio (as opposed to streaming it via Skype), since the project is based around the NAudio framework, which provides a good level of abstraction. Anyway, it is a reasonably complete (and stable) example; definitely worth checking out in my opinion.
If you want or need to use C++ or some other language for development, this project should at least give you some ideas about how to go about it. Still, if you can use .NET, then you're in luck, I think.
A robot voice is often done with a ring modulator effect, mixing the voice with a sine wave; this is the easier option. Alternatively, use a vocoder effect, modulating the voice onto some other waveform, such as a rectangle wave, which can be a bit trickier. Read up on how the effects work and get a program with which you can hear how they sound (Audacity works for the ring modulator; finding and using a vocoder may be a bit harder). Then read how it's done, or get a library that will do the processing for you.
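To make the ring modulator concrete, here is a minimal offline sketch in Python: it multiplies a mono 16-bit WAV file by a low-frequency sine carrier and writes the result. The filenames and the 30 Hz carrier are placeholder choices to experiment with; real-time capture and routing the output to other applications is a separate problem, as the other answers discuss.

```python
# Minimal ring modulator sketch: multiply the voice signal by a sine carrier.
# Assumes a 16-bit mono WAV as input; filenames are placeholders.
import math
import struct
import wave

CARRIER_HZ = 30.0   # low carrier frequencies give the classic robot buzz

with wave.open("voice.wav", "rb") as src:
    params = src.getparams()
    assert params.sampwidth == 2 and params.nchannels == 1, "expects 16-bit mono"
    rate = params.framerate
    raw = src.readframes(params.nframes)

samples = struct.unpack("<%dh" % (len(raw) // 2), raw)

out = []
for n, s in enumerate(samples):
    carrier = math.sin(2.0 * math.pi * CARRIER_HZ * n / rate)
    v = int(s * carrier)
    out.append(max(-32768, min(32767, v)))  # keep within 16-bit range

with wave.open("robot.wav", "wb") as dst:
    dst.setparams(params)
    dst.writeframes(struct.pack("<%dh" % len(out), *out))
```

A vocoder is more work (split the voice into frequency bands and impose each band's envelope on the carrier), but the same file-in, file-out workflow is a convenient way to prototype the effect before touching any real-time audio APIs.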
You are looking to support VSTi or DXi plugins.
There are tons that also act as vocoders, even for free.
You just need to write the host application.
Take a look here :)
Now that's a neat idea, especially for a mobile app.
I'd probably start off-line by using a .wav file as input to get the effects working the way I wanted. You can use any high-level language for this, but you probably want something that will map reasonably well onto C/C++.
In terms of a production version, I'd go native and do this in C or C++. You want something fast for real-time audio processing, and I like to avoid dependencies on things like .NET for distribution. (Not that I have anything against .NET; it's great for servers and for distribution within a company, but I'm not so keen on having it as a dependency for shrink-wrap software.)
Windows DirectShow would be a tempting option; you could do some interesting effects with multimedia as well if you had the voice morpher implemented as a DirectShow filter.
What you're looking for is a vocoder. I don't know if any of the technologies listed above has a vocoder effect, but the best chance would be with DirectX.
Try this sample app; I think it will be useful to you: Link
