Google Fit API - International users sync issue

I'm using the Google Fit REST API.
Here is the data I'm retrieving from Google Fit using the API:
2021-03-21 29989 Steps
2021-03-20 12 Steps
Here is the data the user exported from Google:
3/22/2021 16,480 Steps
3/21/2021 13,521 Steps
In both circumstances, the steps equal 30,001
The dates are clearly off by one day because of the time zone, and the daily counts are shifted for the same reason; however, they add up to the same total.
What general approach/strategy can I take to make the steps obtained from the API match those shown in Google Fit when I don't have the user's time zone?
My API currently loops through the database and syncs all user data, not distinguishing domestic vs international users.
Here is the code snippet used to get steps:
//***** Get steps
case DATATYPE_STEP_COUNT_DELTA:
    if ($dataStreamId == 'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps') {
        $listDatasets = $dataSets->get("me", $dataStreamId, $startTime . '000000000' . '-' . $endTime . '000000000');
        if ($debug == 1) PrintR($listDatasets, "DATATYPE_STEP_COUNT_DELTA");

        $step_count = 0;
        foreach ($listDatasets as $dataSet) {
            if ($dataSet['startTimeNanos']) {
                $sec = $dataSet['startTimeNanos'] / 1000000000;
                $activity_date = date('Y-m-d', $sec);
                $dataSetValues = $dataSet['value'];

                if ($dataSetValues && is_array($dataSetValues)) {
                    foreach ($dataSetValues as $dataSetValue) {
                        if (!isset($stepsArr[$studentencodedid][$activity_date])) $stepsArr[$studentencodedid][$activity_date] = 0;
                        $stepsArr[$studentencodedid][$activity_date] += $dataSetValue['intVal'];
                        $step_count += $dataSetValue['intVal'];
                    }
                }
            }
        }
    }
    break;
//***** End get steps
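One thing to note in the snippet is that date('Y-m-d', $sec) buckets each point using the server's default time zone, so the daily buckets don't line up with the user's local days. A possible approach is to bucket by a per-user time zone instead; here is a minimal sketch, where $userTimezone is a hypothetical per-user IANA zone name (e.g. 'America/New_York') that you would have to store or infer for each user:

// Sketch: bucket a point's start time into the user's local calendar day.
// $userTimezone is hypothetical; it would come from the user's profile or device.
$sec = intdiv((int) $dataSet['startTimeNanos'], 1000000000);
$dt = new DateTime('@' . $sec);                     // '@' timestamps are interpreted as UTC
$dt->setTimezone(new DateTimeZone($userTimezone));  // shift into the user's zone
$activity_date = $dt->format('Y-m-d');              // local day used as the bucket key

Bucketing this way should make the per-day totals match what the user sees in the Google Fit app for their own time zone, while the overall total stays the same.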


How to automatically get basic monitoring info such as CPU and Memory Usage from the IBM i?

I'm trying to get some basic performance data (such as CPU and Memory Usage) from the IBM i every minute or so.
Then I'm creating a Web App, which will display all of this in a centralized dashboard and also notify the user for any unusual values/events.
All I need is some kind of parsable data output from IBM i; it could be JSON, CSV, perhaps even ODBC, ...
I already tried running commands to get spool output, but that isn't consistent, so it can't really be parsed. The latest thing I found is collecting CSV files, but that is not automatic.
Inside the "IBM i Navigator -> Performance -> Investigate Data" there is an option to show a graph with my required data and it's even possible to export it as CSV.
However, I was wondering if it's possible to GET this data via an HTTP request as JSON. I searched around and found mentions of "Integrated Web Services" and "CICS Transaction Server HTTP requests", but nothing specific about getting existing data, only about creating your own.
https://www.ibm.com/docs/en/cics-ts/5.3?topic=protocol-http-requests
https://www.ibm.com/docs/en/i/7.3?topic=tasks-integrated-web-application-server
Thank you!
I don't know if the data you're looking for is available through a web request. What is the greater goal you want to achieve? Just curiosity? Centralized monitoring for erratic values?
Usually, this class of data is exposed in more or less real time via SNMP and is easily accessible by existing monitoring applications. SNMP uses UDP and is much more efficient in terms of processor overhead than web requests.
The graphs you mention might be derived from the Performance Tools, something akin to sar & friends on Linux/Unix. However, this data is also not exported via web request. I think there are API calls within the OS to access this data. See Performance Tools for an overview.
Of course, this data is saved in tables and can be accessed via ODBC from outside IBM i, but given the probable lack of documentation about the table structure, I question whether the effort would pay off.
The system exposes all sorts of performance information as SQL table functions; for example, QSYS2.ACTIVE_JOB_INFO is the table function for active job information.
PHP can be used to write a web service that first calls the table function and then returns the resulting data as a JSON stream.
<?php
$showColNameArr = array("JOB_NAME", "SUBSYSTEM", "JOB_TYPE", "FUNCTION", "FUNCTION_TYPE",
                        "JOB_STATUS", "CPU_TIME");

header("Content-type: text/javascript; charset=utf-8;");

// access an input, posted json object.
$postContents = file_get_contents('php://input');
$postObject = json_decode($postContents);
$action = isset($postObject->action) ? $postObject->action : '';

{
    $conn = as400Connect('qgpl qtemp');
    $sql = "SELECT *
              from TABLE(QSYS2.ACTIVE_JOB_INFO( ))";
    $stmt = db2_prepare($conn, $sql);
    $result = db2_execute($stmt);
    $colNames = db2Stmt_GetColNames($stmt);

    $finalArr = array();
    while ($row = db2_fetch_array($stmt))
    {
        $assocArr = array();
        for ($jx = 0; $jx < sizeof($row); ++$jx)
        {
            $colName = $colNames[$jx];
            if (in_array($colName, $showColNameArr))
            {
                $vlu = $row[$jx];
                $assocArr[$colName] = $vlu;
            }
        }
        $finalArr[] = $assocArr;
    }
    echo json_encode($finalArr);
}

// ---------------------------- as400Connect ------------------------
function as400Connect($libl)
{
    $options = array('i5_naming' => DB2_I5_NAMING_ON);
    if (strlen($libl) > 0)
    {
        $options['i5_libl'] = $libl;
    }
    $conn = db2_connect("*LOCAL", "", "", $options);
    if (!$conn) {
        echo "Connection failed";
        echo "<br>";
        echo db2_conn_errormsg();
        exit();
    }
    return $conn;
}

// --------------------- db2Stmt_GetColNames ----------------
// build and return array of column names from a db2_execute
// executed $stmt.
function db2Stmt_GetColNames($stmt)
{
    $colCx = db2_num_fields($stmt);
    $colNames = array();
    for ($ix = 0; $ix < $colCx; $ix++)
    {
        array_push($colNames, db2_field_name($stmt, $ix));
    }
    return $colNames;
}
?>
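A dashboard or collector can then poll that script over HTTP and decode the JSON it returns. A minimal usage sketch (the endpoint URL is hypothetical; it is wherever the script above is deployed):

<?php
// Sketch: poll the (hypothetical) endpoint and decode the JSON payload.
$json = file_get_contents('http://ibmi.example.com/activejobs.php');
$jobs = json_decode($json, true);

foreach ($jobs as $job) {
    // Each row carries the columns selected in $showColNameArr above.
    echo $job['JOB_NAME'] . ' ' . $job['JOB_STATUS'] . ' CPU=' . $job['CPU_TIME'] . "\n";
}
?>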

Getting non-overlapping days between two dates with Carbon

Use case: an admin assigns tasks to people. Before we assign them, we can see their tasks in a Gantt chart. Based on the task assignment date and deadline, conflict days (overlapping days) arise between tasks.
I wrote this function to get the overlapping days between two date ranges, but now I need to get the non-overlapping days. Below is the function I wrote.
$tasks = Assign_review_tasks::where('assigned_to', $employee)
    ->where('is_active', \Constants::$REVIEW_ACTIVE)
    ->whereNotNull('permit_id')->get();

$obj['task'] = count($tasks);
// count($tasks));
if (count($tasks) > 0) {
    if (count($tasks) > 1) {
        $start_one = $tasks[count($tasks) - 1]->start_date;
        $end_one   = $tasks[count($tasks) - 1]->end_date;
        $end_two   = $tasks[count($tasks) - 2]->end_date;
        $start_two = $tasks[count($tasks) - 2]->start_date;

        if ($start_one <= $end_two && $end_one >= $start_two) { // if the dates overlap
            $obj['day'] = Carbon::parse(min($end_one, $end_two))->diff(Carbon::parse(max($start_two, $start_one)))->days + 1; // return how many days overlap
        } else {
            $obj['day'] = 0;
        }
        // $arr[] = $obj;
    } else {
        $obj['day'] = 0;
    }
} else {
    $obj['day'] = 0;
}
$arr[] = $obj;
start_date and end_date are taken from the database.
I tried modifying it to:
(Carbon::parse((min($end_one, $end_two))->add(Carbon::parse(max($start_two, $start_one))))->days)->diff(Carbon::parse(min($end_one, $end_two))->diff(Carbon::parse(max($start_two, $start_one)))->days + 1);
But it didn't work. In simple terms, this is what I want:
Non conflicting days = (end1-start1 + end2-start2)- Current overlapping days
I'm having trouble translating this expression. Could you help me? Thanks in advance.
Before trying to reimplement complex stuff, I recommend you take a look at enhanced-period for Carbon:
composer require cmixin/enhanced-period
The CarbonPeriod::diff macro method is what I think you're looking for:
use Carbon\CarbonPeriod;
use Cmixin\EnhancedPeriod;

CarbonPeriod::mixin(EnhancedPeriod::class);

$a = CarbonPeriod::create('2018-01-01', '2018-01-31');
$b = CarbonPeriod::create('2018-02-10', '2018-02-20');
$c = CarbonPeriod::create('2018-02-11', '2018-03-31');
$current = CarbonPeriod::create('2018-01-20', '2018-03-15');

foreach ($current->diff($a, $b, $c) as $period) {
    foreach ($period as $day) {
        echo $day . "\n";
    }
}
This will output all the days that are in $current but not in any of the other periods (i.e. the non-conflicting days).
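Since the original code stores a day count in $obj['day'], the same loops can count the non-conflicting days instead of printing them; a small sketch built on the snippet above:

$nonConflictingDays = 0;
foreach ($current->diff($a, $b, $c) as $period) {
    foreach ($period as $day) {
        $nonConflictingDays++; // each $day is in $current but in none of $a, $b, $c
    }
}
$obj['day'] = $nonConflictingDays;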

Issue downloading Facebook replies from post comments

I am trying to download public comments and replies from public Facebook posts, by page.
My code was working until 5 Feb 2018; now it is showing the error below for the replies:
Error in data.frame(from_id = json$from$id, from_name = json$from$name, :
arguments imply differing number of rows: 0, 1
Called from: data.frame(from_id = json$from$id, from_name = json$from$name,
message = ifelse(!is.null(json$message), json$message, NA),
created_time = json$created_time, likes_count = json$like_count,
comments_count = json$comment_count, id = json$id, stringsAsFactors = F)
Please refer to the code below that I am using:
data_fun = function(II, JJ, page, my_oauth) {
  test <- list()
  test.reply <- list()
  for (i in II:length(page$id)) {
    test[[i]] <- getPost(post = page$id[i], token = my_oauth, n = 100000, comments = TRUE, likes = FALSE)
    if (nrow(test[[i]][["comments"]]) > 0) {
      write.csv(test[[i]], file = paste0(page$from_name[2], "_comments_", i, ".csv"), row.names = F)
      for (j in JJ:length(test[[i]]$comments$id)) {
        test.reply[[j]] <- getCommentReplies(comment_id = test[[i]]$comments$id[j], token = my_oauth, n = 100000, replies = TRUE, likes = FALSE)
        if (nrow(test.reply[[j]][["replies"]]) > 0) {
          write.csv(test.reply[[j]], file = paste0(page$from_name[2], "_replies_", i, "_and_", j, ".csv"), row.names = F)
        }
      }
    }
  }
  Sys.sleep(10)
}
Thanks for your support in advance.
I had the very same problem, as Facebook changed the API rules at the end of January. If you update your package with devtools from Pablo Barbera's GitHub, it should work for you.
I have amended my code (a little) and it works fine now for replies to comments. There is one frustrating thing, though: Facebook doesn't appear to allow one to extract the user name. I have a pool of data already, so I am now using that to train and predict gender.
If you have any questions and want to make contact - drop me an email at 'robert.chestnutt2#mail.dcu.ie'
By the way - it may not be an issue for you, but I have had challenges in the past writing the Rfacebook output to a CSV. Saving the output as an .RData file maintains the form a lot better.

How to purge old content in firebase realtime database

I am using the Firebase Realtime Database; over time a lot of stale data has accumulated in it, so I have written a script to delete the stale content.
My Node structure looks something like this:
store
  - {store_name}
    - products
      - {product_name}
        - data
          - {date} e.g. 01_Sep_2017
            - some_event
Scale of the data
#Stores: ~110K
#Products: ~25
Context
I want to clean up all data that is more than 30 months old. I tried the following approach:
For each store, traverse all the products and for each date, delete the node
I ran ~30 threads/script instances, and each thread is responsible for deleting a particular date of data in that month. The whole script runs for ~12 hours to delete one month of data with the above structure.
I have placed a limit/cap on the number of pending calls in each script, and it is evident from the logging that each script reaches the limit very quickly; the rate of firing delete calls is much faster than the rate of deletion, so Firebase becomes the bottleneck.
It is pretty evident that I am running the purge script on the client side; to gain performance, the script should be executed close to the data to save network round-trip time.
Questions
Q1. How can I delete old Firebase nodes efficiently?
Q2. Is there a way to set a TTL on each node so that it cleans up automatically?
Q3. I have confirmed from multiple nodes that the data has been deleted, but the Firebase console is not showing a decrease in data. I also took a backup of the data, and it still shows some data that is not there when I check the nodes manually. I want to know the reason behind this inconsistency.
Does Firebase do soft deletions, so that when we take backups the data is actually there but is not visible via the Firebase SDK or the Firebase console because they can process soft deletes but backups don't?
Q4. For the whole duration my script is running, I see a continuous rise in the bandwidth section. With the script below I am only firing delete calls and not reading any data, yet I still see database reads. Have a look at this screenshot.
Is this because of callbacks for the deleted nodes?
Code
var stores = [];
var storeIndex = 0;
var products = [];
var productIndex = -1;

const month = 'Oct';
const year = 2017;

if (process.argv.length < 3) {
    console.log("Usage: node purge.js $beginDate $endDate i.e. node purge 1 2 | Exiting..");
    process.exit();
}

var beginDate = process.argv[2];
var endDate = process.argv[3];

var numPendingCalls = 0;
const maxPendingCalls = 500;

/**
 * Url Pattern: /store/{domain}/products/{product_name}/data/{date}
 * date Pattern: 01_Jan_2017
 */
function deleteNode() {
    var storeName = stores[storeIndex],
        productName = products[productIndex],
        date = (beginDate < 10 ? '0' + beginDate : beginDate) + '_' + month + '_' + year;

    numPendingCalls++;
    db.ref('store')
        .child(storeName)
        .child('products')
        .child(productName)
        .child('data')
        .child(date)
        .remove(function() {
            numPendingCalls--;
        });
}

function deleteData() {
    productIndex++;
    // When all products for a particular store are complete, start for the new store for given date
    if (productIndex === products.length) {
        if (storeIndex % 1000 === 0) {
            console.log('Script: ' + beginDate, 'PendingCalls: ' + numPendingCalls, 'StoreIndex: ' + storeIndex, 'Store: ' + stores[storeIndex], 'Time: ' + (new Date()).toString());
        }
        productIndex = 0;
        storeIndex++;
    }

    // When all stores have been completed, start deleting for next date
    if (storeIndex === stores.length) {
        console.log('Script: ' + beginDate, 'Successfully deleted data for date: ' + beginDate + '_' + month + '_' + year + '. Time: ' + (new Date()).toString());
        beginDate++;
        storeIndex = 0;
    }

    // When you have reached endDate, all data has been deleted, call the original callback
    if (beginDate > endDate) {
        console.log('Script: ' + beginDate, 'Deletion script finished successfully at: ' + (new Date()).toString());
        process.exit();
        return;
    }

    deleteNode();
}

function init() {
    console.log('Script: ' + beginDate, 'Deletion script started at: ' + (new Date()).toString());
    getStoreNames(function() {
        getProductNames(function() {
            setInterval(function() {
                if (numPendingCalls < maxPendingCalls) {
                    deleteData();
                }
            }, 0);
        });
    });
}
PS: This is not the exact structure I have, but it is very similar to what we have (I have changed the node names and tried to make it a realistic example).
Whether the deletes can be done more efficiently depends on how you do them now. Since you didn't share the minimal code that reproduces your current behavior, it's hard to say how to improve it.
There is no support for a time-to-live property on nodes. Typically developers do the clean-up in an administrative program/script that runs periodically. The more frequently you run the cleanup script, the less work it has to do, and thus the faster it will be.
Also see:
Delete firebase data older than 2 hours
How to delete firebase data after "n" days
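Such a periodic cleanup job can be written in any server-side language against the Realtime Database REST API, by issuing an HTTP DELETE on a node's .json URL. A minimal PHP sketch, with a hypothetical project URL, auth token, and path:

<?php
// Sketch: delete one stale {date} node via the Realtime Database REST API.
// $databaseUrl, the auth token, and the path are hypothetical placeholders.
$databaseUrl = 'https://your-project.firebaseio.com';
$authToken   = getenv('FIREBASE_DB_SECRET'); // or an OAuth access token

$path = '/store/some_store/products/some_product/data/01_Sep_2017.json';

$context = stream_context_create([
    'http' => ['method' => 'DELETE'],
]);
file_get_contents($databaseUrl . $path . '?auth=' . $authToken, false, $context);
?>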
Firebase actually deletes the data from disk when you tell it to. There is no way through the API to retrieve it, since it is really gone. But if you have a backup from a previous day, the data will of course still be there.

h2o steam prediction service results not being recognized as a BinaryPrediction for binomial estimator

I have a DRF model created in h2o flow that is supposed to be binomial, and flow indicates that it is binomial,
but I am having a problem where, after importing it into h2o steam and deploying it to the prediction service, the model does not seem to be recognized as binomial. My reason for thinking this is shown below. This is a problem because I believe it is what is causing the prediction service to NOT show the confidence value for the prediction (this reasoning is also shown below).
In the prediction service, I can get a prediction label, but no values are filled in the index-label-probability table.
Using the browser inspector (Google Chrome), the prediction output seems to depend on a file called predict.js.
In order to get the prediction probability values to show in the prediction service, it seems like this block of code needs to run to get to this line. Opening the predict.js file within the inspector on the prediction service page and adding some debug output statements near the top (indicated with DEBUG/ENDDEBUG comments in the code below), my showResult() function then looks like:
function showResult(div, status, data) {
  ////////// DEBUG
  console.log("showResult entered")
  ////////// ENDDEBUG
  var result = '<legend>Model Predictions</legend>'
  ////////// DEBUG
  console.log(data)
  console.log(data.classProbabilities)
  console.log("**showResult: isBinPred=" + isBinaryPrediction)
  ////////// ENDDEBUG
  if (data.classProbabilities) {
    ////////// DEBUG
    console.log("**showResult: data.classProbabilities not null")
    ////////// ENDDEBUG
    // binomial and multinomial
    var label = data.label;
    var index = data.labelIndex;
    var probs = data.classProbabilities;
    var prob = probs[index];
    result += '<p>Predicting <span class="labelHighlight">' + label + '</span>';
    if (probs.length == 2) {
      result += ' based on max F1 threshold </p>';
    }
    result += ' </p>';
    result += '<table class="table" id="modelPredictions"> \
      <thead> \
        <tr> \
          <th>Index</th> \
          <th>Labels</th> \
          <th>Probability</th> \
        </tr> \
      </thead> \
      <tbody> \
      ';
    if (isBinaryPrediction) {
      var labelProbabilitiesMapping = [];
      outputDomain.map(function(label, i) {
        var labelProbMap = {};
        labelProbMap.label = outputDomain[i];
        labelProbMap.probability = probs[i];
        if (i === index) {
          labelProbMap.predicted = true;
        }
        labelProbMap.originalIndex = i;
        labelProbabilitiesMapping.push(labelProbMap);
      });
      labelProbabilitiesMapping.sort(function(a, b) {
        return b.probability - a.probability;
      });
      var limit = labelProbabilitiesMapping.length > 5 ? 5 : labelProbabilitiesMapping.length;
      for (var i = 0; i < limit; i++) {
        if (labelProbabilitiesMapping[i].predicted === true) {
          result += '<tr class="rowHighlight">'
        } else {
          result += '<tr>'
        }
        result += '<td>' + labelProbabilitiesMapping[i].originalIndex + '</td><td>' + labelProbabilitiesMapping[i].label + '</td> <td>' + labelProbabilitiesMapping[i].probability.toFixed(4) + '</td></tr>';
      }
    } else {
      for (var label_i in outputDomain) {
        if (parseInt(label_i) === index) {
          result += '<tr class="rowHighlight">'
        } else {
          result += '<tr>'
        }
        result += '<td>' + label_i + '</td><td>' + outputDomain[label_i] + '</td> <td>' + probs[label_i].toFixed(4) + '</td></tr>';
      }
    }
    result += '</tbody></table>';
  }
  else if ("cluster" in data) {
    // clustering result
    result = "Cluster <b>" + data["cluster"] + "</b>";
  }
  else if ("value" in data) {
    // regression result
    result = "Value <b>" + data["value"] + "</b>";
  }
  else if ("dimensions" in data) {
    // dimensionality reduction result
    result = "Dimensions <b>" + data["dimensions"] + "</b>";
  }
  else {
    result = "Can't parse result: " + data;
  }
  div.innerHTML = result;
}
and clicking "predict" in the prediction service now generates the console output:
If I add a statement isBinaryPrediction = true to force the global variable to true (around here) and run the prediction again, the console shows:
indicating that the variable outputDomain is undefined. The variable outputDomain seems to be set in the function showModel. That function appears to run when the page loads, so I can't edit it in the Chrome inspector to see what the variable values are. If anyone knows how to fix this problem (getting the prediction probability values to show up in h2o steam's prediction service for binomial models), it would be a big help. Thanks :)
The UI has not been updated to handle MOJOs yet and there seems to be a bug. You're welcome to contribute: https://github.com/h2oai/steam/blob/master/CONTRIBUTING.md
My solution is very hacky, but it works for my particular case (i.e. I have a DRF binomial model in h2o steam that is not being recognized as a binary model; how I know this is shown in this answer).
Solution:
In my original post, there was a variable outputDomain that was undefined. Looking at the source code, that variable is set to (what is supposed to be) the domain labels of the output response for the model, here. I changed this line from outputDomain = domains[i1]; to outputDomain = domains[i1-1];. My output after clicking the predict button looks like:
From the official Linux download of h2o steam, you can access the prediction service's predict.js file at steam-1.1.6-linux-amd64/var/master/assets/ROOT.war/extra/predict.js; after saving changes, relaunch the Jetty server with $ java -Xmx6g -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war.
Causes?:
I suspect the problem has something to do with the fact that the global variable isBinaryPrediction in predict.js seems to remain false for my model. The reason isBinaryPrediction is false seems to be that in the function showInputParameters(), data.m has no field _problem_type. Using console.dir(data, {depth: null}) in the inspector console to see the fields of data.m, I see that the expected field data.m._problem_type does not exist and so returns undefined; thus isBinaryPrediction is never set to true (here).
Why this is happening, I do not know. I have only used DRF models in steam so far and this may be a problem with that model, but I have not tested. If anyone knows why this may be happening, please let me know.
