I'm looking for a watch or strap that collects data such as heart rate and GPS continuously. My goal is to then export this data for further analysis (analysis similar to what Whoop does).
I've considered a few things:
WearOS devices. I'm sure I can write or download an app that collects this kind of data (maybe Cardiogram), but I want to make sure it's as granular as possible. I'm worried that if I background the app, it won't have full access to the sensor data.
Strap devices such as the Mi Fit. This seems to collect data 24/7 (and has long battery life), but I'm not sure what the granularity of the data is. I haven't found any sample data for download.
Thank you in advance!
Within the Fitbit dashboard, you will be able to get full access to all your own data. You may try it now by signing up for an account and checking the following:
The granularity of the data is minute by minute, and you will only be able to access your own data. To collect a database of other participants for your research, you will have to either ask the participants to export their own data and send you a copy, or develop an application or website that accesses their daily data directly once they authorize you to do so (minute-by-minute data from other users is only available case by case and upon request).
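For illustration, pulling your own minute-by-minute heart-rate series through the Fitbit Web API could look roughly like this in Java (the intraday endpoint path and the OAuth 2.0 token handling are assumptions on my part; check the current API documentation):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FitbitIntradayFetch {
        public static void main(String[] args) throws Exception {
            // OAuth 2.0 access token obtained for your own account (placeholder).
            String token = args[0];
            // Intraday heart-rate time series, 1-minute granularity, for one day.
            URL url = new URL(
                "https://api.fitbit.com/1/user/-/activities/heart/date/today/1d/1min.json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "Bearer " + token);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON; feed this to your analysis pipeline
                }
            }
        }
    }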
I am working on a socially-based application. It mainly deals with creating schedules for a specific activity and inviting other users to join on a specific date and time. The application uses Parse as its backend. Almost every action taken requires some networking, and it can feel slow on a low-quality connection.
How can I make the application seem more responsive?
When data is retrieved, say a list of current friends, do I store those friends in Core Data, and then update my Core Data model by adding or removing friends during a refresh?
Should I use some sort of caching tool, such as NSCache?
Do I just present a loader on the screen while the network requests are running and then update the UI when finished?
I see that applications such as Facebook and Instagram present a few posts when they are launched while they fetch the latest data, but I'm not sure how they accomplish this.
Can you provide me with an appropriate direction to take?
Keeping some kind of offline cache is useful, since it means you'll have some data to display immediately even if you're still loading new data from a network connection. Core Data can be useful for this, but it really depends on the nature of the data and on how your app uses it. NSCache probably isn't what you want, since it only keeps data in memory and doesn't save it for the next time your app runs.
Apps that need frequent network updates (like Facebook) commonly make use of iOS's background fetch system. If you opt in to this, iOS will launch your app in the background from time to time to allow it to get new data. If your app saves that data somewhere on the device, it's already present when the user taps your app icon.
Using background fetch is pretty easy: you add the appropriate key to Info.plist, tell iOS how often you'd like to be woken (via UIApplication's setMinimumBackgroundFetchInterval, since the default is to never fetch), and then implement application(_:performFetchWithCompletionHandler:) in your app delegate.
I am working on a family networking app for Android that enables family members to share their location and track the location of others simultaneously. You can suppose that this app is similar to Life360 or Sygic Family Locator. At first I decided to use an MBaaS, and I completed the coding using Parse. However, I realized that even though each user reads and writes geolocation data only about once per minute (in some cases the data is sent less frequently), the request traffic already exceeds my forward-looking expectations. For this reason, I want to build the system on solid ground, but I have some doubts about whether Parse can still do its duty if the number of users increases to 100-500k.
Considering all this, I am looking for an alternative method or service to set up such a system. I think using a backend service like Parse is a moderate solution but not the best one. What are the possible ways to achieve this, from bad to good? For example, one of my friends says I could use Sinch, an instant messaging service running in the background between users, which is priced by the number of active users. Nevertheless, that sounds odd to me; I have never seen an instant messaging service used that way.
Your comments and suggestions will be highly appreciated. Thank you in advance.
Well, Sinch wouldn't handle location updates or the storing of location data; that part would still be Parse, which is what you are really asking about.
And since you implied that the requests would be too much for your user base, maybe I wrongly assumed price was the problem with Parse.
But to answer your question about sending location data: I would probably throttle it, if I were you, to a mile or so. There is no need for family members to know the position down to the foot in real time. If there is a need for that, I would probably implement a request method instead and ask the device for its location only when someone is actually interested. A rough sketch of the throttling idea follows.
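Something along these lines, in plain Java (the one-mile threshold, the five-minute interval, and the class name are placeholders I picked for illustration):

    // Client-side throttle: only push a location update to the backend when the
    // user has moved far enough or enough time has passed since the last upload.
    public class LocationThrottle {
        private static final double MIN_MILES = 1.0;
        private static final long MIN_INTERVAL_MS = 5 * 60 * 1000; // 5 minutes

        private double lastLat, lastLon;
        private long lastSentMs = 0;

        public boolean shouldSend(double lat, double lon, long nowMs) {
            if (lastSentMs == 0
                    || nowMs - lastSentMs >= MIN_INTERVAL_MS
                    || distanceMiles(lastLat, lastLon, lat, lon) >= MIN_MILES) {
                lastLat = lat;
                lastLon = lon;
                lastSentMs = nowMs;
                return true; // caller uploads to Parse (or whatever backend)
            }
            return false;
        }

        // Haversine great-circle distance in miles.
        private static double distanceMiles(double lat1, double lon1,
                                            double lat2, double lon2) {
            double r = 3958.8; // Earth radius in miles
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                    + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                    * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * r * Math.asin(Math.sqrt(a));
        }
    }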
We are considering using parse.com as our database back end, and we are currently looking for a Java SDK for Parse. As far as I can tell, there are two: one is Almonds (https://bitbucket.org/jskrepnek/almonds), and the other is the official Android SDK from Parse (https://parse.com/downloads/android/Parse/latest).
We are planning to make calls out to Parse from a Java-based server (Jetty), and we do not have an Android app or plan to have one in the foreseeable future.
I am leaning towards the Android SDK since it's the official one. However, my primary concern is its performance in a multi-threaded environment when used by a Jetty server which potentially could be initiating many requests to Parse at the same time for the same or different sets of data.
My other alternative is obviously to use their REST API and write my own utilities to encapsulate the functions. I would highly appreciate it if anyone with experience in this could share it with us. Thanks!
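For example, the kind of wrapper I have in mind would look roughly like this (the X-Parse-Application-Id and X-Parse-REST-API-Key headers are from Parse's REST documentation; the class and method names are just placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ParseRestClient {
        private final String appId;
        private final String restKey;

        public ParseRestClient(String appId, String restKey) {
            this.appId = appId;
            this.restKey = restKey;
        }

        // Fetch a single object from a Parse class by its objectId.
        public String getObject(String className, String objectId) throws Exception {
            URL url = new URL("https://api.parse.com/1/classes/" + className + "/" + objectId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("X-Parse-Application-Id", appId);
            conn.setRequestProperty("X-Parse-REST-API-Key", restKey);
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            return body.toString(); // JSON representation of the object
        }
    }

Because each call opens its own connection and the client keeps no mutable state, one instance should be safe to share across Jetty handler threads.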
I write this in January, 2014. Parse.com is rapidly growing and expanding their platform. I cannot say how long this information will be correct or how long my observations will remain relevant.
That said...
Number one. Parse.com charges by the number of transactions. Many small transactions can result in a higher cost to the app owner. We are using Parse.com's Pro Plan. The Pro Plan has these limits:
15 million HTTP requests per month
Burst limit of 40 requests per second
If you have 4,500 users, each sending 125 HTTP requests to Parse.com per day, then you are already looking at 4,500 × 125 × 30 = 16,875,000 requests every 30 days, which blows through the 15 million cap. Parse.com also offers a higher level of service called Parse Enterprise. Details about this plan are not published.
Second, Parse.com's intended purpose is to be a light-weight back-end for mobile apps. I believe Parse.com is a very good mobile backend-as-a-service (MBaaS).
I am building a server-side application using Parse.com. I use the REST interface, Cloud Functions, and Cloud Jobs. In my opinion, Parse.com is a clumsy application server. It does not expose powerful tools to manipulate data. For example, the only way to drop a table is by clicking a button in Parse's Web Data Browser. Another example: Parse sets the type of an attribute when an object is first saved, and if the data type is later changed, say from string to pointer, Parse.com will refuse to save the object.
The Cloud Function programming model is built on Node.js. Complex business logic will quickly land you in callback hell, because all database queries and save operations are asynchronous. That is, when you save or query an object, you hand Parse a function and say "when the save/query is complete, run this function". This might come naturally to LISP programmers, but not to OO programmers raised on Java or .Net. Be aware of this if you intend to write Cloud Code for your application. My productivity took a nose dive when I started writing Cloud Functions.
The biggest challenge I experience with Parse.com is round-trip-time. Here are some informal benchmarks:
Getting a single object via the REST API has a pretty consistent RTT of 800 ms:
GET https://api.parse.com/1/classes/Element/xE5sZCQd6D
Response: Status=200, Round trip time=0.846
ICMP is blocked, but just knocking on the door takes 400-800 ms, depending on the day.
GET https://api.parse.com/1
Status=404, Round trip time=0.579
Parse.com is in Amazon's data center in Northern Virginia. I used Ookla's Speedtest to estimate my latency to that area. Reaching the Richmond Business Center server (75.103.15.244) in Ashburn gives me a ping time of 95ms. A server in D.C. gave me a ping time of 97 ms. Two hundred milliseconds of Internet overhead is not the problem.
The more queries or save operations a Cloud Function performs, the longer the response time. Cloud Functions with one or two queries or save operations have an RTT between 1 and 3 seconds. Cloud Functions with multiple queries and save operations have an RTT between 3 and 10 seconds.
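For reference, a throwaway probe along the following lines is enough to take this kind of informal measurement (an illustrative sketch, not the exact tool used for the numbers above):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Crude round-trip timer: time from opening the request to receiving headers.
    public class RttProbe {
        public static void main(String[] args) throws Exception {
            URL url = new URL(args.length > 0 ? args[0] : "https://api.parse.com/1");
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode(); // blocks until the response arrives
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Status=%d, Round trip time=%.3f%n", status, seconds);
        }
    }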
HTTP requests sent to Parse.com time out after 15 seconds. I have a Cloud Function I use for testing that deletes all objects in the database. This Cloud Function can delete a couple hundred rows before timing out. I converted the Cloud Function into a Cloud Job, because jobs can run for up to 15 minutes. The job deletes 400-500 objects and takes 30-60 seconds to complete. Job status is available only through the Web browser. I had to create a light-weight job status system so other devs could query the status of their jobs.
Parse's best use case is the iPhone developer who wrote a game and needs to store the user's high scores, but knows nothing about servers. Use Parse where it is strong.
We are building a mobile app with a Rails CMS to manage it.
What does our app look like?
Every admin user of the app can set up one private channel with a very small amount of data: about 50 short strings.
Users can then download the app, register for a few different channels, and fetch the data from the server to their devices. The data will be stored locally and will not be fetched again unless the admin user updates it (but we assume that won't happen very often). Every channel will be available to no more than 500 devices.
The users can contribute to the channel, but this data will be stored on S3 and not in the database.
2 important points:
Most of the channels will be active for about 5 months, for roughly 500 users each, but most of the activity will happen within the same couple of days.
Every channel is for a small number of users (500), but we hope :) to get to hundreds of thousands of admin users.
Building the CMS with Rails, we saw that using SimpleDB is more straightforward than using DynamoDB. But, as we are not server experts, we saw the limitations of SimpleDB, and we don't know whether SimpleDB could handle the amount of data transfer we will have (if our app succeeds). Another important point is that DynamoDB costs are much higher and do not depend on usage, while SimpleDB will be much cheaper at the beginning.
The question is:
Can SimpleDB fit our needs?
Could we migrate to DynamoDB later if our service grows?
Starting out with a new project and not really knowing what to expect from the usage, I'd say the better option is to go with SimpleDB. It doesn't sound like your usage is going to be very high, and SimpleDB should be able to handle that with no problem. The real power of DynamoDB comes in when you have a lot of load. You don't seem to fall into that category.
If you design your application correctly, switching between SimpleDB and DynamoDB should be a simple task if you decide at some point that SimpleDB is not working out; I do this kind of switch all the time with other components of my software. Since both databases are NoSQL, you shouldn't have a problem converting between the two. Just make sure that any features you use in SimpleDB are also available in DynamoDB, and design your schema with both in mind: DynamoDB has stricter requirements around indexes, so check that the two will stay compatible (see the sketch below).
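One way to keep that switch cheap is to hide the store behind your own small interface and write one adapter per backend. A rough sketch, with names of my own invention (real adapters would wrap the AWS SDK's SimpleDB and DynamoDB clients):

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal storage abstraction: the app codes against this interface only.
    public interface ItemStore {
        void put(String key, Map<String, String> attributes);
        Optional<Map<String, String>> get(String key);
    }

    // In-memory stand-in for tests; a SimpleDbItemStore or DynamoDbItemStore
    // would implement the same interface using the corresponding AWS client.
    class InMemoryItemStore implements ItemStore {
        private final Map<String, Map<String, String>> data = new ConcurrentHashMap<>();

        public void put(String key, Map<String, String> attributes) {
            data.put(key, attributes);
        }

        public Optional<Map<String, String>> get(String key) {
            return Optional.ofNullable(data.get(key));
        }
    }

Swapping backends then means swapping which implementation you construct at startup, with no changes to the rest of the application.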
That being said, plenty of people have been using SimpleDB for their applications, and I don't expect you would see any performance problems unless your product really takes off, at which point you can invest resources in moving to DynamoDB.
Aside from all that, there is the pricing, which you already mentioned. SimpleDB is the obvious solution for your use case.
I was wondering if there is a tool to keep track of application performance. What I have in mind is a tool that listens for updates and registers performance metrics published by an application, e.g. the time to serve a request or the time a certain operation took to finish. The tool would then aggregate the data and measure performance trends.
If you want to measure your application from the outside, then you can use RRDtool to collect the data.
You can use SLAMD for web apps written in Java.
For Django, use hotshot.
Search for "profiler" plus your language or framework.
Take a look at HP SiteScope. Its ability to drive the system with a web user script and to monitor metrics on the backend (even to the extent of custom shell scripts and database queries), plus the ability to add report/alert logic against these combined data sets, appears to be what you need.
Another mechanism you might consider is a roll-your-own service: use cURL to push information in, query the systems involved to pull metrics or database information, and then build your own interface for alerting and reporting.
Then it becomes a cost question: can you build that level of functionality for less money than purchasing an already existing solution on the open market?
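If you do go the roll-your-own route, the collection side can start very small: time the operation, record the duration, and aggregate. A minimal in-process sketch (all names here are illustrative):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    // Tiny in-process metrics registry: record durations, read back averages.
    public class Metrics {
        private static final Map<String, LongAdder> totalMs = new ConcurrentHashMap<>();
        private static final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

        public static void record(String name, long elapsedMs) {
            totalMs.computeIfAbsent(name, k -> new LongAdder()).add(elapsedMs);
            counts.computeIfAbsent(name, k -> new LongAdder()).increment();
        }

        public static double averageMs(String name) {
            long n = counts.getOrDefault(name, new LongAdder()).sum();
            return n == 0 ? 0 : (double) totalMs.get(name).sum() / n;
        }

        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            // ... serve a request or run the operation being measured ...
            Metrics.record("serve_request", System.currentTimeMillis() - start);
            System.out.println("avg ms: " + Metrics.averageMs("serve_request"));
        }
    }

A periodic task could then push these aggregates into RRDtool or into whatever reporting interface you build.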
Ref:
HP SiteScope Wiki Page