Google Input Tools "API" -- can it be used? - google-api

I noticed that Google accepts transliteration and IME requests in any language through the url:
https://inputtools.google.com/request?text=$&itc=$&num=$\
&cp=0&cs=1&ie=utf-8&oe=utf-8&app=test
where each $ is a placeholder for the variables below, for any language and text.
For example, French (try it):
var text = "ca me plait",
itc = "fr-t-i0-und",
num = 10;
// Result:
[
  "SUCCESS",
  [
    [
      "ca me plait",
      [
        "ça me plaît"
      ]
    ]
  ]
]
Or, Mandarin (try it):
var text = "shide",
itc = "zh-t-i0-pinyin",
num = 5;
// Result:
[
  "SUCCESS",
  [
    [
      "shide",
      [
        "使得",
        "似的",
        "是的",
        "实德",
        "似地"
      ],
      [],
      {
        "annotation": [
          "shi de",
          "shi de",
          "shi de",
          "shi de",
          "shi de"
        ]
      }
    ]
  ]
]
All languages work and return great suggestions. The thing is, I can't find documentation for this anywhere on the web, although it clearly looks like an API. Does anyone know whether there is an official Google client, or whether Google is okay with raw, unauthenticated requests?
It's used, perhaps unofficially, by plugins like jQuery.chineseIME.js, but I would appreciate any official usage information.

In the meantime, I created my own plugin that uses it for Chinese and can easily be extended: https://bitbucket.org/purohit/jquery.intlkeyboard.js.
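For anyone who wants to try it outside the browser, here is a minimal Python sketch using only the standard library. The parameter names are taken from the URL above; the endpoint is undocumented and unauthenticated, so it may change or be rate-limited at any time, and the function names here are my own:

```python
import json
import urllib.parse
import urllib.request

def parse_suggestions(body_text):
    """Extract the suggestion list from an Input Tools response body."""
    data = json.loads(body_text)
    if data[0] != "SUCCESS":
        return []
    # data[1] is a list of [input_text, [suggestion, ...], ...] entries.
    return data[1][0][1]

def transliterate(text, itc, num=10):
    """Query the (unofficial) Input Tools endpoint for suggestions."""
    params = urllib.parse.urlencode({
        "text": text, "itc": itc, "num": num,
        "cp": 0, "cs": 1, "ie": "utf-8", "oe": "utf-8", "app": "test",
    })
    url = "https://inputtools.google.com/request?" + params
    with urllib.request.urlopen(url) as resp:
        return parse_suggestions(resp.read().decode("utf-8"))
```

With the French example above, `transliterate("ca me plait", "fr-t-i0-und")` should return suggestions like "ça me plaît".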

Related

Argo Events: Use data filter in sensor to identify modified/added/removed path in mono-repo

I'm using Argo Events and Argo Workflow for my CI/CD chain, which works pretty neatly. But I'm having some trouble setting up the data filter for the GitHub webhook payloads of my monorepo.
I'm trying to let the sensor trigger the defined workflow only if files were changed under a certain subpath. The payload contains three fields, added, removed, and modified, which list the files changed in each commit (webhook-events-and-payloads#push).
The paths I'm searching for are service/jobs/* and service/common*/*.
The filter I defined is:
- path: "[commits.#.modified.#(%\"*service*\")#,commits.#.added.#(%\"*service*\")#,commits.#.removed.#(%\"*service*\")#]"
  type: string
  value:
    - "(\bservice/jobs\b)|(\bservice/common*)"
I validated my filter in a tiny Go script, since gjson is what Argo Events uses to apply the data filter.
package main

import (
    "github.com/tidwall/gjson"
    "regexp"
)

const json = `{
  "commits": [
    {
      "added": [],
      "removed": [],
      "modified": [
        "service/job-manager/README.md"
      ]
    },
    {
      "added": [],
      "removed": [
        "service/joby/something.md"
      ],
      "modified": [
        "service/job-manager/something.md"
      ]
    },
    {
      "added": [],
      "removed": [
        "service/joby/something.md"
      ],
      "modified": [
        "service/joby/someother.md"
      ]
    }
  ],
  "head_commit": {
    "added": [
      "service/job-manager/something.md"
    ],
    "removed": [
      "service/joby/something.md"
    ],
    "modified": [
      "service/job-manager/README.md"
    ]
  }
}`

func main() {
    value := gjson.Get(json, "[commits.#.modified.#(%\"*service*\")#,commits.#.added.#(%\"*service*\")#,commits.#.removed.#(%\"*service*\")#]")
    println(value.String())
    matched, _ := regexp.MatchString(`(\bservice/job-manager\b)|(\bservice/common*)`, value.String())
    println(matched) // is the string contained?
}
The script gives me the results I expect. But when I add the same data filter to the sensor, the workflow is not triggered for the same webhook payload.
Does anyone have any ideas?
UPDATE:
Thanks for the hint to include body. in the paths.
I ended up setting the filters:
- path: "[body.commits.#.modified.#()#,body.commits.#.added.#()#,body.commits.#.removed.#()#]"
  type: string
  value:
    - ".*service/jobs.*"
    - ".*service/common.*"
The path should start with body.
The value should escape special characters with \\.
So the data filter should be
- path: "[body.commits.#.modified.#(%\"*service*\")#,body.commits.#.added.#(%\"*service*\")#,body.commits.#.removed.#(%\"*service*\")#]"
  type: string
  value:
    - "(\\bservice/jobs\\b)|(\\bservice/common*)"

Extracting json payload in shell script

I have a file like the one below. As you can see, there are several lines of content between curly braces. Since there are multiple groups of opening and closing curly braces, I want to extract the content between the braces ({ and }) for each group separately.
Sample file:
{
  "/tmp/velero-4bf57ed2-velero/velero/templates/crds.yaml": [
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/deployment.yaml": [
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/restic-daemonset.yaml": [
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/secret.yaml": [
  ]
}
{
  "/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/deployment.yaml": [
    ".spec.replicas: '2' != '0'",
  ],
  "/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/servicemonitor.yaml": [
    "error: the server doesn't have a resource type \"ServiceMonitor\"\n"
  ]
}
{
  "/tmp/metrics-server-1960953a-metrics-server-certs/raw/templates/resources.yaml": [
    "error: the server doesn't have a resource type \"Issuer\"\n",
    "error: the server doesn't have a resource type \"Certificate\"\n"
  ]
}
Expected result: three separate data chunks, one containing the content between each pair of curly braces.
Could someone help me here?
If you have a sequence of valid JSON objects, you can use jq to easily and robustly process them:
Given file.jsons:
{
  "/tmp/velero-4bf57ed2-velero/velero/templates/crds.yaml": [ ""
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/deployment.yaml": [ ""
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/restic-daemonset.yaml": [ ""
  ],
  "/tmp/velero-4bf57ed2-velero/velero/templates/secret.yaml": [ ""
  ]
}
{
  "/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/deployment.yaml": [
    ".spec.replicas: '2' != '0'"
  ],
  "/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/servicemonitor.yaml": [
    "error: the server doesn't have a resource type \"ServiceMonitor\"\n"
  ]
}
{
  "/tmp/metrics-server-1960953a-metrics-server-certs/raw/templates/resources.yaml": [
    "error: the server doesn't have a resource type \"Issuer\"\n",
    "error: the server doesn't have a resource type \"Certificate\"\n"
  ]
}
You can for example reformat each object as a single line:
$ jq -s -r 'map(@json) | join("\n")' < file.jsons
{"/tmp/velero-4bf57ed2-velero/velero/templates/crds.yaml":[""],"/tmp/velero-4bf57ed2-velero/velero/templates/deployment.yaml":[""],"/tmp/velero-4bf57ed2-velero/velero/templates/restic-daemonset.yaml":[""],"/tmp/velero-4bf57ed2-velero/velero/templates/secret.yaml":[""]}
{"/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/deployment.yaml":[".spec.replicas: '2' != '0'"],"/tmp/autoscaler-fb12fa7a-cluster-autoscaler/cluster-autoscaler/templates/servicemonitor.yaml":["error: the server doesn't have a resource type \"ServiceMonitor\"\n"]}
{"/tmp/metrics-server-1960953a-metrics-server-certs/raw/templates/resources.yaml":["error: the server doesn't have a resource type \"Issuer\"\n","error: the server doesn't have a resource type \"Certificate\"\n"]}
Now you can process it line by line without having to worry about matching up curly braces.
Thank you for your suggestion, but the above jq command does not work for every JSON payload. For example, it gives an error for the payload below:
{
  "/tmp/ingress-dae7bd30-ingress-internet/nginx-ingress/templates/controller-deployment.yaml": [
    ".spec.replicas: '2' != '3'",
  ],
  "/tmp/ingress-dae7bd30-ingress-internet/nginx-ingress/templates/controller-metrics-service.yaml": [
    ".spec.clusterIP: '' != '10.3.24.53'"
  ],
  "/tmp/ingress-dae7bd30-ingress-internet/nginx-ingress/templates/controller-service.yaml": [
    ".spec.clusterIP: '' != '10.3.115.118'"
  ],
  "/tmp/ingress-dae7bd30-ingress-internet/nginx-ingress/templates/controller-stats-service.yaml": [
    ".spec.clusterIP: '' != '10.3.115.30'"
  ],
  "/tmp/ingress-dae7bd30-ingress-internet/nginx-ingress/templates/default-backend-deployment.yaml": [
  ]
}
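The error with that last payload comes from the trailing comma after the first array element, which makes it invalid JSON, so jq refuses it. A sketch in Python that first strips such trailing commas (naively, with a regex that would misfire if a string value itself ended in `,]` or `,}`) and then splits the stream into one object per curly-brace group via json.JSONDecoder.raw_decode:

```python
import json
import re

def split_json_stream(text):
    """Yield each top-level JSON object from concatenated JSON text."""
    # Naive cleanup: drop commas that directly precede a closing ] or }.
    text = re.sub(r',(\s*[\]}])', r'\1', text)
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # Skip whitespace between objects.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

# Each chunk can then be reprinted compactly, one object per line:
# for obj in split_json_stream(open("file.jsons").read()):
#     print(json.dumps(obj))
```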

How do you add a "PrivateIpAddresses" to a Network interface

I want the troposphere output to look like the JSON below. I could not find any examples to point me in the right direction at all. Also, if I come across similar issues in the future, is there any documentation I should refer to in particular?
"NetworkInterfaces": [
  {
    "DeleteOnTermination": "true",
    "Description": "Primary network interface",
    "DeviceIndex": 0,
    "SubnetId": "subnet-yolo",
    "PrivateIpAddresses": [
      {
        "PrivateIpAddress": "xxx.xx.xx.xx",
        "Primary": "true"
      }
    ],
    "GroupSet": [
      "xxxxxx",
      "yyyyyy"
    ]
  }
]
The answer was pretty basic. PrivateIpAddressSpecification is a property, not a template parameter, so first build a sample_ip like so:
sample_ip = ec2.PrivateIpAddressSpecification(
    Primary="true",
    PrivateIpAddress="172.168.1.1"
)
Then pass it directly:
PrivateIpAddresses=[sample_ip]
I'll keep this here in case some uber-beginner like me is not able to do this on their own.

Why is the Distance Matrix API returning the status "ZERO_RESULTS" even though the coordinates are correctly specified?

{
  "destination_addresses" : [ "14.868924,79.873609" ],
  "origin_addresses" : [ "14.843799,79.862726" ],
  "rows" : [
    {
      "elements" : [
        {
          "status" : "ZERO_RESULTS"
        }
      ]
    }
  ],
  "status" : "OK"
}
The URL for the above response is given below; just swap in your own API key to verify it:
https://maps.googleapis.com/maps/api/distancematrix/json?units=metric&origins=14.843799,79.862726&destinations=14.868924,79.873609&mode=walking&key=XXXX
Meanwhile, the Bing distance calculation API successfully returns a distance for these same coordinates.
I reported this issue to the Google issue tracker, and they fixed it within approximately one month:
https://issuetracker.google.com/u/1/issues/38478121
Thanks
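As a side note, when consuming this API it helps to check each element's status rather than just the top-level one: as the response above shows, the request-level status can be "OK" while an individual origin/destination pair is "ZERO_RESULTS". A minimal parsing sketch (the helper name is my own):

```python
import json

def first_element_distance(body):
    """Return the distance in meters for the first pair, or None."""
    element = body["rows"][0]["elements"][0]
    if element["status"] != "OK":
        # e.g. "ZERO_RESULTS": no route was found for this pair.
        return None
    return element["distance"]["value"]

# The response from the question parses to None (no route found):
sample = json.loads("""{
  "destination_addresses": ["14.868924,79.873609"],
  "origin_addresses": ["14.843799,79.862726"],
  "rows": [{"elements": [{"status": "ZERO_RESULTS"}]}],
  "status": "OK"
}""")
```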

Amazon Alexa Device Discovery for Smart Home API with Lambda Failing

I have set up an Alexa Smart Home Skill: all settings are done, the OAuth2 process is complete, and the skill is enabled on my Amazon Echo device. The Lambda function is set up and linked to the skill. When I "Discover Devices", I can see the payload hit my Lambda function in the log. I am literally returning, via the context.succeed() method, the following JSON with a test appliance. However, Echo tells me that it fails to find any devices.
{
  "header": {
    "messageId": "42e0bf9c-18e2-424f-bb11-f8a12df1a79e",
    "name": "DiscoverAppliancesResponse",
    "namespace": "Alexa.ConnectedHome.Discovery",
    "payloadVersion": "2"
  },
  "payload": {
    "discoveredAppliances": [
      {
        "actions": [
          "incrementPercentage",
          "decrementPercentage",
          "setPercentage",
          "turnOn",
          "turnOff"
        ],
        "applianceId": "0d6884ab-030e-8ff4-ffffaa15c06e0453",
        "friendlyDescription": "Study Light connected to Loxone Kit",
        "friendlyName": "Study Light",
        "isReachable": true,
        "manufacturerName": "Loxone",
        "modelName": "Spot"
      }
    ]
  }
}
Does the above payload look correct?
According to https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/smart-home-skill-api-reference#discovery-messages the version attribute is required. Your response seems to be missing that attribute.
In my (very short) experience with this, even the smallest mistake in the response would generate a silent error like the one you are experiencing.
I had the same problem. If you are implementing discovery for an "Entertainment Device", make sure you have wrapped the output in an 'event' key for context.succeed:
var payload = {
  endpoints: [
    {
      "endpointId": "My-id",
      "manufacturerName": "Manufacturer",
      "friendlyName": "Living room TV",
      "description": "65in LED TV from Demo AV Company",
      "displayCategories": [],
      "cookie": {
        "data": "e.g. ip address"
      },
      "capabilities": [
        {
          "interface": "Alexa.Speaker",
          "version": "1.0",
          "type": "AlexaInterface"
        }
      ]
    }
  ]
};
var header = request.directive.header;
header.name = "Discover.Response";
context.succeed({
  event: {
    header: header,
    payload: payload
  }
});
Although this is never mentioned in the sample code, and an incorrect example is given there (https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/steps-to-create-a-smart-home-skill), the response body it provides does include the "event" key.
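The same shape works from a Python Lambda, where the handler simply returns the dict. A sketch with the endpoint fields trimmed to a minimum; the wrapping in "event" is the part that matters:

```python
def lambda_handler(event, context):
    """Answer a Discover directive with one hypothetical endpoint."""
    header = dict(event["directive"]["header"])  # echo the request header
    header["name"] = "Discover.Response"
    payload = {
        "endpoints": [
            {
                "endpointId": "My-id",
                "manufacturerName": "Manufacturer",
                "friendlyName": "Living room TV",
                "description": "65in LED TV from Demo AV Company",
                "displayCategories": [],
                "cookie": {},
                "capabilities": [
                    {
                        "interface": "Alexa.Speaker",
                        "version": "1.0",
                        "type": "AlexaInterface",
                    }
                ],
            }
        ]
    }
    # The discovery response must be wrapped in an "event" key.
    return {"event": {"header": header, "payload": payload}}
```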
Recreating the Lambda function helped me fix the issue. I also checked "Enable trigger" while creating it, though I'm not sure whether that matters. After that, the device provided by my skill was discovered successfully.
Edit: this answer was wrong. The only useful information was this:
The context.fail syntax is actually deprecated. Look up the Lambda context object properties; it should now look more like callback(null, resultObj).
Did you include the return statement in your function?
return {
  "header": header,
  "payload": payload
}
It was missing in the example and after adding it, I was able to 'discover' my device.
