Google Assistant Smart Home: Does Google smart home support custom commands for water heater devices?

I have a number of devices that I want to control through Google smart home. Is it possible to create custom commands? I want to develop commands like "I want to take a shower".
Can I customize the traits? Is there any support for a circulation action? Thank you.

Is it possible to create custom commands?
There is an alternative way that lets you use custom commands to control your device: see Set up and manage Routines.
If you'd like the command "I want to take a shower" to turn on your water heater, follow the suggestions below:
Choose Voice command as the Starter to trigger the Routine
Choose Custom command when you set an Action in the app
Test the command through the Google Assistant app
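As for the traits question: custom traits are not supported, but smart home does include a built-in water heater device type with predefined traits. A SYNC response entry for such a device might look roughly like this (the id and name are placeholders):

{
  "id": "water-heater-1",
  "type": "action.devices.types.WATERHEATER",
  "traits": [
    "action.devices.traits.OnOff",
    "action.devices.traits.TemperatureControl"
  ],
  "name": { "name": "Water heater" },
  "willReportState": false
}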

Related

Actions on Google not showing device registration option

I am trying to use Google Assistant in a Raspberry Pi project. I have created my developer project. However, there is no option to register the device model as instructed in the Google Assistant SDK for Devices:
https://developers.google.com/assistant/sdk/guides/library/python/embed/register-device
As a result I am not able to execute the sample code.
The link was also not there for me. I just started guessing at the URL until I found it:
https://console.actions.google.com/u/0/project/{your-project-name}/deviceregistration/
This allows you to follow the instructions in their docs to register a device.
Since I was not able to do it from the front-end console, I tried the "Alternative ways to register". I was eventually able to register the device by using the registration tool commands mentioned at https://developers.google.com/assistant/sdk/reference/device-registration/device-tool#register-device
I did this after downloading the credentials file and authenticating. However, I believe even the authentication can be done with these commands.
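For reference, the registration went roughly like these two commands (the model and device IDs are placeholders, and flag names may differ between SDK versions):

googlesamples-assistant-devicetool register-model --model my-model-id \
    --type LIGHT --manufacturer "Assistant SDK developer" \
    --product-name "Assistant SDK light"
googlesamples-assistant-devicetool register-device --device my-device-id \
    --model my-model-id --client-type LIBRARY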
When you create a new project, you'll see a bunch of cards like "Build an X action". Click "Skip" in the corner to create a blank project that will have the device registration tab available.

Xamarin voice command by car

I'm looking for the best approach to implement voice commands in a Xamarin app.
Here are my requirements:
I don't need to launch my app by voice. Instead, my users will launch the app through touch (so, when the app is not running, no voice recognition is needed by my app)
My app is a client/server app and will always be online (the backend will run on Azure)
My app will be used primarily in the car (so consider environmental noise)
My app will work in many languages, such as Italian, Spanish, French and English
My app should be developed with Xamarin (and possibly MvvmCross or similar)
In my app there will be two kinds of voice commands:
to select an item from a short list: the app will show a list of items, such as "apple, kiwi, banana and strawberry", and the user will have to say one of those words.
to change the current view. Typically these voice commands will be something like "cancel", "confirm" and "more"
The typical interaction between user, app and server should be this:
the user says one of the available commands in the current view/activity/page
assume here that the user knows exactly which commands he/she can use; it does not matter for now how he/she learned them (he/she just knows them)
the user may prefix the command with a special phrase, such as "hey 'appname'", to give a command like "hey 'appname', confirm"
Note: the only purpose of the "hey 'appname'" part of the voice command is to let the app know when a command starts. The app can always be in listening mode, but it must avoid sending the audio stream continuously to the server to recognize commands
ideally, the app would recognize these commands locally, without involving the remote server, since the voice commands are predefined and well known in each view. In any case, the app can send the audio to the server, which will return a string (in this example the text returned would be "confirm", since the audio was "hey 'appname', confirm")
the app will map the recognized text to the available commands and invoke the right one
the user will receive feedback from the app. The feedback could be:
voice feedback (text-to-speech)
visual feedback (something on the screen)
both of the above
I was looking at Azure Cognitive Services, but in this case, as far as I understand, there is no way to recognize the start of the command locally (everything works server-side through REST APIs or clients). So the user would have to press a button before every voice command, and I need to avoid this kind of interaction: while the app is running, my user has his/her hands on the steering wheel and can't touch the display every time.
Moreover, I was looking at the Cortana Skills Kit and the Bot Framework, but:
It seems that Cortana Skills are available in English only
Actually, I don't need to involve Cortana to launch my app
I don't have experience with these topics, so I hope that my question is clear and, generally speaking, that it can be useful for other newbie users as well.
* UPDATE 1 *
Speech recognition with a Voice Command Definition (VCD) file is really close to what I need (see the sketch after this list), because:
it has a way to activate a command through a command name shortcut
it works in the foreground (and in the background as well, even though in my case I don't need the background)
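For reference, a minimal VCD file looks roughly like this (the app and command names are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="CommandSet_en">
    <!-- The prefix plays the role of "hey 'appname'" -->
    <CommandPrefix>MyApp</CommandPrefix>
    <Example>MyApp, confirm</Example>
    <Command Name="Confirm">
      <Example>confirm</Example>
      <ListenFor>confirm</ListenFor>
      <Feedback>Confirmed</Feedback>
      <Navigate/>
    </Command>
  </CommandSet>
</VoiceCommands>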
Unfortunately, this service works only on Windows, since it uses the local API. Maybe the right approach could be based on the following considerations:
every platform exposes a local speech recognition API (Cortana, Siri, Google Now)
Xamarin exposes the Siri and Google Now APIs and makes them available through C#
it would be useful to create a facade component that exposes the three different local speech APIs through a common interface
I'm wondering whether there is some other solution to this. Cortana, as a personal assistant, is available on Windows, iOS and Android. Since Cortana works both with a local API and with a remote service (Cortana Skills), is Cortana the right approach? Does Cortana support many languages (or, at least, is such support on a road map)?
So, just some thoughts here. If you have other ideas or suggestions, please add them. Thanks

Listing installed applications in WP7

In my application I need to share an item in various ways (like Facebook, Twitter, LinkedIn, etc.). So I need to list the applications installed on my phone so that I can share via any one of them. Can anybody help me out?
It is not possible to know what other applications are installed on a phone. Having access to this information would be a potential data privacy issue.
Windows Phone 7 has built-in support for this kind of action with launchers/choosers. Within the list of launchers you will find the ShareStatusTask, which opens the built-in 'share your status' control. This control checks your phone for the profiles/networks to which you have connected. By using the Status property of the task you can then fill in the message you want to share. In the opened control you can then choose on which networks you want to share your message.
See the code sample below on how to use this task:
// Launch the built-in 'share your status' chooser with a pre-filled message.
ShareStatusTask shareStatusTask = new ShareStatusTask();
shareStatusTask.Status = "Share my status on different networks";
shareStatusTask.Show();
NOTE: If you start a launcher from your app, your app will be deactivated. Normally, after completing the task your app will be reactivated. For an overview of launchers/choosers, have a look at Launchers and Choosers Overview for Windows Phone.

OSX Carbon: Quartz event taps to get keyboard input

I want to get keyboard input on OS X using C++ without using Cocoa or deprecated Carbon UPP handlers, and if possible without using IOHID, since that's a lot of extra work.
I already implemented a simple mouse class using Quartz event taps, and it works like a charm; now I'd like to use them to implement a keyboard class. However, as the reference states under CGEventTapCreate:
http://developer.apple.com/library/mac/#documentation/Carbon/Reference/QuartzEventServicesRef/Reference/reference.html
you can only access key events if one of the following is true:
The current process is running as the root user.
Access for assistive devices is enabled. In Mac OS X v10.4, you can enable this feature using System Preferences, Universal Access panel, Keyboard view.
That is a very serious limitation, since I want my application to work without any weird settings. Is there any way to work around this? If not, is there any alternative to using taps in Carbon?
Thanks!
The easiest way is to use the semi-deprecated Carbon function RegisterEventHotKey, see this SO Q&A, for example.
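For reference, a minimal hot-key registration might look like this (a sketch; the 'htk1' signature and the Cmd+Option+R combination are arbitrary choices):

#include <Carbon/Carbon.h>

// Called whenever the registered hot key is pressed.
static OSStatus HotKeyHandler(EventHandlerCallRef nextHandler,
                              EventRef event, void *userData) {
    // react to the hot key here ...
    return noErr;
}

int main() {
    EventTypeSpec eventType = { kEventClassKeyboard, kEventHotKeyPressed };
    InstallApplicationEventHandler(NewEventHandlerUPP(HotKeyHandler),
                                   1, &eventType, NULL, NULL);

    EventHotKeyID hotKeyID = { 'htk1', 1 };
    EventHotKeyRef hotKeyRef;
    RegisterEventHotKey(kVK_ANSI_R, cmdKey | optionKey, hotKeyID,
                        GetApplicationEventTarget(), 0, &hotKeyRef);

    RunApplicationEventLoop();
    return 0;
}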
If not, you need to live with that restriction. The restriction is there to prevent a bad actor from installing a keylogger behind the scenes. You need to ask the user to open the preferences, type the admin password, etc.
You could try to use AXMakeProcessTrusted. This is supposed to be the same as Access for assistive devices, on a per-process basis.
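For completeness, a minimal listen-only keyboard tap looks like this (a sketch; it compiles against ApplicationServices but still needs root or assistive access to actually receive key events):

#include <ApplicationServices/ApplicationServices.h>

// Invoked for every key event delivered to the tap.
static CGEventRef KeyCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon) {
    if (type == kCGEventKeyDown) {
        int64_t keycode =
            CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
        // handle the key code here ...
        (void)keycode;
    }
    return event; // pass the event through unmodified
}

int main() {
    CGEventMask mask = CGEventMaskBit(kCGEventKeyDown) |
                       CGEventMaskBit(kCGEventKeyUp);
    // kCGEventTapOptionListenOnly: observe events without modifying them.
    CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap,
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly,
                                         mask, KeyCallback, NULL);
    if (!tap) return 1; // creation fails without the required privileges

    CFRunLoopSourceRef source =
        CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}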

Autostart Mac application programmatically

We have a cross-platform application with a feature to autostart it once the user logs in. How can this be done on a Mac, from within the application? Manually adding it to Login Items works, but I am looking for a way to do it using an API or something similar.
If it's a GUI app, adding it as a login item is the best way to go. Apple's dev note on the subject lists three ways to do this: with the Shared File List API, via Apple Events, or with the CFPreferences API.
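A sketch using the Shared File List API (since deprecated, but available on the systems in question) might look like this, assuming you pass in the app bundle's URL:

#include <CoreServices/CoreServices.h>

// Adds the given app bundle to the current user's Login Items.
static void AddToLoginItems(CFURLRef appURL) {
    LSSharedFileListRef items = LSSharedFileListCreate(
        NULL, kLSSharedFileListSessionLoginItems, NULL);
    if (items == NULL) return;
    LSSharedFileListItemRef item = LSSharedFileListInsertItemURL(
        items, kLSSharedFileListItemLast, NULL, NULL, appURL, NULL, NULL);
    if (item != NULL) CFRelease(item);
    CFRelease(items);
}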
You have to create a launchd property list file and place it in ~/Library/LaunchAgents (current user only) or /Library/LaunchAgents (system-wide).
This guide from Apple will help you accomplish that task.
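A minimal LaunchAgent plist might look like this (the label and program path are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.myapp.autostart</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/MyApp.app/Contents/MacOS/MyApp</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>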
