Since I installed the Google Fit app on my Nexus 5, it has been tracking my step count and time spent walking. I'd like to retrieve this info via the Google Fitness REST API (docs), but I can't work out how to get any of that data from the REST API.
I've used the OAuth 2.0 playground to successfully list dataSources, but none of the examples I have tried have returned any fitness data whatsoever. I feel like I need to use something similar to a DataReadRequest from the (Android SDK), but I'm not building an Android app -- I just want to access fitness data already stored by the Google Fit app.
Is it even possible to get the data gathered by the Google Fit app? If so, how can I read and aggregate step count data using the REST API?
It turns out that the answer is in the docs after all. Here is the format of the request.
GET https://www.googleapis.com/fitness/v1/users/{userId}/dataSources/{dataSourceId}/datasets/{datasetId}
The only supported {userId} value is me (with authentication).
Possible values for {dataSourceId} are available by listing the user's data sources with a separate request (GET .../users/me/dataSources).
The bit I missed was that {datasetId} is not really an ID, but actually where you define the timespan in which you are interested. The format for that variable is {startTime}-{endTime} where the times are in nanoseconds since the epoch.
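For example, if you already have ordinary Unix timestamps in seconds, you just multiply by 10^9. A quick Ruby sketch (the same arithmetic applies in any language):
# Convert Unix seconds to the nanosecond values the datasets endpoint expects
start_ns = 1470475368 * 1_000_000_000   # => 1470475368000000000
end_ns   = 1471080168 * 1_000_000_000   # => 1471080168000000000
dataset_id = "#{start_ns}-#{end_ns}"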
I was able to get this working by going through the Google PHP client and noticed that it appends extra zeros to the start and finish times in the GET request -- nine, in fact.
Use the same GET request format as mentioned in an answer above:
https://www.googleapis.com/fitness/v1/users/{userId}/dataSources/{dataSourceId}/datasets/{datasetId}
Here is an example using a plain Unix timestamp in seconds (which is what PHP's time() function returns):
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.step_count.delta:com.google.android.gms:estimated_steps/datasets/1470475368-1471080168
This is the response I get:
{
"minStartTimeNs": "1470475368",
"maxEndTimeNs": "1471080168",
"dataSourceId":
"derived:com.google.step_count.delta:com.google.android.gms:estimated_steps
}
However, if you append nine zeros to the start and finish times that you put in your GET request, shaping it like this:
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.step_count.delta:com.google.android.gms:estimated_steps/datasets/1470475368000000000-1471080168000000000
It worked - this is the response I got:
{
"minStartTimeNs": "1470475368000000000",
"maxEndTimeNs": "1471080168000000000",
"dataSourceId":
"derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
"point": [
{
"modifiedTimeMillis": "1470804762704",
"startTimeNanos": "1470801347560000000",
"endTimeNanos": "1470801347567000000",
"value": [
{
"intVal": -3
}
],
"dataTypeName": "com.google.step_count.delta",
"originDataSourceId": "raw:com.google.step_count.delta:com.dsi.ant.plugins.antplus:AntPlus.0.124"
},
The response is a lot longer but I truncated it for the sake of this post. So when passing your datasets parameter into the request:
1470475368-1471080168 will not work, but 1470475368000000000-1471080168000000000 will.
This did the trick for me, hope it helps someone!
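To put it all together, here is a minimal Ruby sketch of the whole request (any language works the same way). It assumes you already have an OAuth 2.0 access token with the fitness scopes; the GOOGLE_FIT_ACCESS_TOKEN environment variable and the seven-day window are just placeholders:
require 'net/http'
require 'json'

access_token = ENV['GOOGLE_FIT_ACCESS_TOKEN']  # assumed: OAuth 2.0 token with fitness scopes

# Nanoseconds since the epoch, i.e. Unix seconds * 10^9
end_ns   = Time.now.to_i * 1_000_000_000
start_ns = end_ns - (7 * 24 * 3600 * 1_000_000_000)  # last seven days

data_source = 'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps'
uri = URI("https://www.googleapis.com/fitness/v1/users/me/dataSources/#{data_source}/datasets/#{start_ns}-#{end_ns}")

request = Net::HTTP::Get.new(uri)
request['Authorization'] = "Bearer #{access_token}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
points = JSON.parse(response.body)['point'].to_a
puts points.sum { |p| p['value'].first['intVal'] }  # total steps over the window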
I tried the POST method with the URL and body below. This works; please check the inline comments too.
Use URL: https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate
Method: POST
Body:
{
"aggregateBy": [{
"dataTypeName": "com.google.step_count.delta",
"dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps"
}],
"bucketByTime": { "durationMillis": 86400000 }, // This is 24 hours
"startTimeMillis": 1504137600000, //start time
"endTimeMillis": 1504310400000 // End Time
}
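For completeness, the same aggregate call as a Ruby sketch along the lines above (again, the access token is a placeholder and the timestamps mirror the body shown):
require 'net/http'
require 'json'

uri = URI('https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json',
                                   'Authorization' => "Bearer #{ENV['GOOGLE_FIT_ACCESS_TOKEN']}")
request.body = {
  aggregateBy: [{
    dataTypeName: 'com.google.step_count.delta',
    dataSourceId: 'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps'
  }],
  bucketByTime: { durationMillis: 86_400_000 },  # one bucket per 24 hours
  startTimeMillis: 1_504_137_600_000,            # start time (milliseconds since the epoch)
  endTimeMillis: 1_504_310_400_000               # end time
}.to_json

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts JSON.parse(response.body)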
Related
I am trying to use the Ruby SDK to upload videos to YouTube automatically. Inserting a video, deleting a video, and setting the thumbnail for a video works fine, but for some reason trying to add captions results in an invalid metadata client error regardless of the parameters I use.
I wrote code based on the documentation and code samples in other languages (I can't find any examples of doing this in Ruby with the current gem). I am using the google-apis-youtube_v3 gem, version 0.22.0.
Here is the relevant part of my code (assuming I have uploaded a video with id 'XYZ123'):
require 'googleauth'
require 'googleauth/stores/file_token_store'
require 'google/apis/youtube_v3'
def authorize [... auth code omitted ...] end
def get_service
service = Google::Apis::YoutubeV3::YouTubeService.new
service.key = API_KEY
service.client_options.application_name = APPLICATION_NAME
service.authorization = authorize
service
end
body = {
"snippet": {
"videoId": 'XYZ123',
"language": 'en',
"name": 'English'
}
}
s = get_service
s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')
I have tried many different combinations, but the result is always the same:
Google::Apis::ClientError: invalidMetadata: The request contains invalid metadata values, which prevent the track from being created. Confirm that the request specifies valid values for the snippet.language, snippet.name, and snippet.videoId properties. The snippet.isDraft property can also be included, but it is not required. status_code: 400
It seems that there really is not much choice for the language and video ID values, and there is nothing remarkable about naming the captions as "English". I am really at a loss as to what could be wrong with the values I am passing in.
Incidentally, I get exactly the same response even if I just pass in nil as the body.
I looked at the OVERVIEW.md file included with the google-apis-youtube_v3 gem, and it referred to the Google simple REST client Usage Guide, which in turn mentions that most object properties do not use camel case (which is what the underlying JSON representation uses). Instead, in most cases properties must be sent using Ruby's "snake_case" convention.
Thus it turns out that the snippet should specify video_id and not videoId.
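With that change (and reusing get_service from the question), the snippet becomes:
body = {
  snippet: {
    video_id: 'XYZ123',   # snake_case, not videoId
    language: 'en',
    name: 'English'
  }
}
s = get_service
s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')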
That seems to have let the request go through, so this resolves this issue.
The response I'm getting now has a status of "failed" and a failure reason of "processingFailed", but that may be the subject of another question if I can't figure it out.
How can I get ALL records from Route53?
I'm referring to the code snippet here, which seemed to work for someone, but it isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of them (I have about 7000 records) via resource record sets, but I can't seem to get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
start_record_name: fqdn(name),
start_record_type: type,
max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API. I used this as a starting point: https://github.com/aws/aws-sdk-ruby/issues/620
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, for me to get just the record I need given the hosted_zone_id, type and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs
next_name = fqdn(name)
next_type = type
loop do
response = route53.list_resource_record_sets(
hosted_zone_id: hosted_zone,
start_record_name: next_name,
start_record_type: next_type,
max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
)
records = response.resource_record_sets
# Break here if you find the record you want
# Also break if we've run out of pages
break unless response.is_truncated
next_name = response.next_record_name
next_type = response.next_record_type
end
I am trying to prototype a trigger using the Zapier CLI and I am running into an issue with the 'Pull In Samples' section when setting up the trigger in the UI.
This tries to pull in a live sample of data to use, however the documentation states that if no results are returned it will use the sample data that is configured for the trigger.
In most cases there will be no live data, so ideally I would prefer the sample data to be used in the first instance; however, my trigger never seems to use the sample, and I have not been able to find a concrete example of a 'no results' response.
The API I am using returns XML so I am manipulating the result into JSON which works fine if there is data.
If there are no results, I have so far tried returning '[]', but that just hangs, and if I check the Zapier HTTP logs it keeps looping HTTP requests until I cancel the sample check.
Returning '[{}]' returns an error that I need an 'id' field.
The definition I am using is:
module.exports = {
key: 'getsmsinbound',
noun: 'GetSMSInbound',
display: {
label: 'Get Inbound SMS',
description: 'Check for inbound SMS'
},
operation: {
inputFields: [
{ key: 'number', required: true, type: 'string', helpText: 'Enter the inbound number' },
{ key: 'keyword', required: false, type: 'string', helpText: 'Optional if you have configured a keyword and you wish to check for specific keyword messages.' },
],
perform: getsmsinbound,
sample: {
id: 1,
originator: '+447980123456',
destination: '+447781484146',
keyword: '',
date: '2009-07-08',
time: '10:38:55',
body: 'hello world',
network: 'Orange'
}
}
};
I'm hoping it's something obvious, as scouring the web and the Zapier documentation has not brought me any luck!
Sample data must be provided from your app and the sample payload is not used for this poll specifically. From the docs:
Sample results will NOT be used for a user's Zap testing step. That step requires data to be received by an event or returned from a polling URL. If a user chooses to "Skip Test", then the sample result, if provided, will be used.
Personally, I have never seen "Skip Test" show up. A while back I asked support about this:
That's a great question! It's definitely one of those "chicken and egg" situations when using REST Hooks - if there isn't a sample available, then everything just stalls.
When the Zap editor tries to obtain a "sample result", there are three places where it's going to look:
1. The Polling endpoint (in Step #3 of your trigger's setup) is invoked for the current user. If that returns "nothing", then the Zap editor will try the next step.
2. The "most recent record/data" in the Zap's history. Since this is a brand new Zap, there won't be anything present.
3. The Sample result (in Step #4 of your trigger's setup). The Zap editor will tell the user that there's "nothing to show", and will give the user the option to "skip test and continue", which will use the sample JSON that you've provided here.
In reality, it will just continue to retry the request over and over and never provide the user with a "skip test and continue" option. I just emailed again asking if anything has changed since then, but it looks like existing sample data is a requirement.
Perhaps create a record in your API by default and hide it from normal use and just send back that one?
Or send back dummy data even though Zapier says not to. Not sure, but I don't know how people can set up a Zap when no data has been created yet (and Zapier says not many of their apps have this issue, but nearly every trigger I've created and every use case for other applications would hint to me otherwise).
Has anyone used Ruby neo4j-core to mass process data? Specifically, I am looking at taking in about 500k lines from a relational database and insert them via something like:
Neo4j::Session.current.transaction.query
.merge(m: { Person: { token: person_token} })
.merge(i: { IpAddress: { address: ip, country: country,
city: city, state: state } })
.merge(a: { UserToken: { token: token } })
.merge(r: { Referrer: { url: referrer } })
.merge(c: { Country: { name: country } })
.break # This will make sure the query is not reordered
.create_unique("m-[:ACCESSED_FROM]->i")
.create_unique("m-[:ACCESSED_FROM]->a")
.create_unique("m-[:ACCESSED_FROM]->r")
.create_unique("a-[:ACCESSED_FROM]->i")
.create_unique("a-[:ACCESSED_FROM]->r")
.create_unique("i-[:IN]->c")
.exec
However, doing this locally takes hours on hundreds of thousands of events. So far, I have attempted the following:
* Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvement here.
* Doing tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It makes the exact same requests, with the same frequency, but just gets a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted via the non-Neo4j::Transaction technique - it's highly possible I am doing something incorrectly.)
Some other ideas I had:
* Post process into a CSV, SCP up and then use the neo4j-import command line utility (although, that seems kinda hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time. With transactions all you're really getting is the ability to rollback. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
* You can submit your requests via raw HTTP JSON to the APIs. If you still want to use the Query API you can use the to_cypher and merge_params methods to get the Cypher and params for that (merge_params is a private method currently, so you'd need to send(:merge_params)). See the sketch after this list.
* You can load via CSV as you said. You can either:
  * use the neo4j-import command, which allows you to import very fast but requires you to put your CSV in a specific format, requires that you be creating a DB from scratch, and requires that you create indexes/constraints after the fact
  * use the LOAD CSV command, which isn't as fast, but is still pretty fast.
* You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use via neo4apis-twitter and neo4apis-github.
* If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
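For the first option, here is a rough, untested sketch of batching statements against Neo4j's transactional HTTP endpoint (the localhost URL is a placeholder for a default local install), building each statement from a neo4j-core Query object via to_cypher and merge_params:
require 'net/http'
require 'json'

# `queries` is assumed to be an array of neo4j-core Query objects,
# built the same way as the MERGE query in the question.
def post_batch(queries, url: 'http://localhost:7474/db/data/transaction/commit')
  statements = queries.map do |query|
    {
      statement:  query.to_cypher,
      parameters: query.send(:merge_params)  # private method, hence send
    }
  end

  uri = URI(url)
  request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json',
                                     'Accept' => 'application/json')
  request.body = { statements: statements }.to_json
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
end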
If you're implementing a solution which is going to make queries like above where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the eager (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE)
Also worth a look: Max De Marzi's post on Scaling Cypher Writes
Brief: I'm using RethinkDB's change feeds to watch changes on a specific document (not the entire table). Each record looks like this:
{
"feedback": [ ],
"id": "bd808f27-bf20-4286-b287-e2816f46d434" ,
"projectID": "7cec5dd0-bf28-4858-ac0f-8a022ba6a57e" ,
"timestamp": Tue Aug 25 2015 19:48:18 GMT+00:00
}
I have one process that is appending items to the feedback array, and another process that needs to watch for changes on the feedback array... and then do something (specifically, broadcast only the last item appended to feedback via websockets). I've got it wired up so that it will monitor updates to the entire document - however, it requires receiving the complete document, then getting just the last item in the feedback array. This feels overly heavy, when all I need to get back is the last thing added.
Current code used to update the document:
r.table('myTable').get(uuid)
.update({feedback:r.row('feedback').append('message 1')})
.run(conn, callback)
^ That will run several times over the course of a minute or so, adding the latest message to 'feedback'.
Watching changes:
r.table('myTable').get(uuid)
.changes()
.run(conn, function(err, cursor){
cursor.each(function(err, row){
var indexLast = row.old_val ? row.old_val.feedback.length : 0,
nextItem = row.new_val.feedback[indexLast];
// ... do something with nextItem
})
})
Finally, here's the question (2 parts really):
1: When I'm updating the document (adding to feedback), do I have to run an update on the entire document (as in my code above), or is it possible to simply append to the feedback array and be done with it?
2: Is the way I'm doing it above (receiving the entire document and plucking the last element from feedback array) the only way to do this? Or can I do something like:
r.table('myTable').get(uuid)
.changes()
.pluck('feedback').slice(8) // <- kicking my ass
.run(conn, function(err, cursor){
cursor.each(function(err, row){
var indexLast = row.old_val ? row.old_val.feedback.length : 0,
nextItem = row.new_val.feedback[indexLast];
// ... do something with nextItem
})
})
Let's go over your questions
1: When I'm updating the document (adding to feedback),
do I have to run an update on the entire document (as in my code above),
No, you don't. As written, you only update the feedback field, not the entire document.
or is it possible to simply append to the feedback array and be done with it?
It's possible, and you are already doing it.
The way it's written, it may look like your client driver has to fetch the content of the feedback array, append a new element, and then write the whole content back. But that's not the case here. The whole query r.row('feedback').append('message 1') is serialized as a JSON string and passed to RethinkDB. RethinkDB runs it, atomically, on the server. The content of feedback and the appending are not handled on the client, nor sent back to the server.
If you used tcpdump like this:
tcpdump -nl -w - -i lo0 -c 500 port 28015|strings
You can see this JSON string being sent to the RethinkDB server when you run your query:
[1,[53,[[16,[[15,["myTable"]],1]],[69,[[2,[3]],{"feedback":[29,[[170,[[10,[3]],"feedback"]],"doc2"]]}]]]],{}]
Yes, that single JSON query was transmitted over the network, not the whole document. Hope it makes sense. More information about that JSON string can be found at http://rethinkdb.com/docs/writing-drivers/ and https://github.com/neumino/rethinkdbdash/blob/master/lib/protodef.js#L84
2: Is the way I'm doing it above (receiving the entire document and plucking the last element from feedback array) the only way to do this? Or can I do something like:
Ideally we would want to use bracket notation to get a field of the document and listen for changes on that field. Unfortunately that doesn't work yet, so we have to use map to transform it. Again, this runs on the server, and only the last element of feedback is transmitted to the client, not the whole document:
r.table('myTable').get(1).changes().map(function(doc) {
return doc('new_val')('feedback').default([]).nth(-1)
})