Is GPS inaccuracy consistent over short time spans?

I'm interested in developing a semi-autonomous RC lawnmower.
That is, the operator would decide when to stop, turn, etc., but could request "slightly overlap previous cut" and the mower would automatically do so. (Having operated high-end RC mowers at trade shows, I can say this is the tedious part. Overcoming that, plus the high cost -- which I believe is possible -- would make for a commercial success.)
This feature would require accurate horizontal positioning. I have investigated ultrasonic, laser, optical, and GPS. Each has its problems in this application. (I'll resist the temptation to go off on these tangents here.)
So... my question...
I know GPS horizontal accuracy is only 3-4m. Not good enough, but:
I don't need to know where I am on the planet. I only need to know where I am relative to where I was a minute ago.
So, my question is: is the inaccuracy consistent in the short term? If so, I think it would work for me. If it varies wildly by +/- 1.5m from one second to the next, then it will not work.
I have tried to find this information but have had no success (possibly because of the ubiquity of other GPS-accuracy discussion), so I appreciate any guidance.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Edit ~~~~~~~~~~~~~~~~~~~~~~
It's looking to me like GPS is not just skewed but granular. I'd be interested in hearing from anyone who can give better insight into this, but for now I'm going to explore other options.
I realized that even though my intended application is "outdoor", this question is technically in the field of "indoor positioning systems" so I am adding that tag.
My latest thinking is to have 3 "intelligent" high-dB ultrasonic (US) speaker units. The mower emits RF requests for a tone from each speaker in rapid sequence, measures the time it takes to "hear" each unit's response, thereby calculating the distance to each of these fixed points, and uses trilateration to get position. If the fixed-point speakers are 300' away from the mower, the mower may have moved several feet between the 1st and 3rd response, so this would have to be allowed for in the software. If it is possible to differentiate 3 different US frequencies, they could be requested/received "simultaneously", though you still run into issues when you're close to one fixed unit and far from another, so some software correction may still be necessary. If we can assume the mower is moving in a straight line, this isn't too complicated.
Another variation is the mower does not request the tones. The fixed units send RF "here comes tone from unit A" etc., and the mower unit just monitors both RF info and US tones. This may simplify things somewhat, but it seems it really requires the ability to determine which speaker a tone is coming from.
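For the position-fix step itself, here is a minimal 2D trilateration sketch (Python rather than mower firmware; the beacon coordinates and ranges below are made-up illustration values):

import numpy as np

def trilaterate_2d(beacons, distances):
    """Estimate (x, y) from ranges to three (or more) fixed beacons.

    beacons   -- list of (x, y) beacon coordinates in metres
    distances -- measured range to each beacon in metres
    Subtracting the first range equation from the others turns the
    circle-intersection problem into a linear least-squares solve.
    """
    (x1, y1), d1 = beacons[0], distances[0]
    a_rows, b_rows = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        a_rows.append([2.0 * (xi - x1), 2.0 * (yi - y1)])
        b_rows.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
    return tuple(pos)

# Illustration only: three speakers at surveyed corners of a yard, ranges
# derived from ultrasonic time-of-flight (delay * ~343 m/s).
beacons = [(0.0, 0.0), (90.0, 0.0), (0.0, 60.0)]
distances = [50.0, 58.31, 50.0]
print(trilaterate_2d(beacons, distances))   # approximately (40.0, 30.0)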

This seems like the kind of thing you could (and should) measure empirically. Just set a GPS of your liking down in the middle of a field on a clear day and wait an hour. Then come back and see what you find.
Because I'm in a city, I can't run out and do this for you. However, I found a paper entitled iGeoTrans – A novel iOS application for GPS positioning in geosciences.
That paper includes a figure which duplicates the test I propose. You'll note that both the iPhone 4 and the Garmin eTrex 10 perform pretty poorly versus the accuracy you say you need.
But the authors do some Math Magic™ to reduce the uncertainty in the position, presumably by using some kind of averaging. That gets them to a 3.53m RMSE measure.
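The paper doesn't spell out the processing, but a plain sliding-window mean of fixes is the simplest version of that kind of averaging; it knocks down fix-to-fix jitter for a stationary or slow receiver, though it cannot remove a slowly drifting bias. A sketch (the window size and sample fixes are arbitrary):

from collections import deque

class FixSmoother:
    """Sliding-window mean of recent (lat, lon) fixes.

    Helps against fix-to-fix jitter for a stationary or slow receiver,
    but cannot remove a slowly drifting bias, and it lags a moving one.
    The window size is a tuning knob, not a recommendation.
    """
    def __init__(self, window=10):
        self.fixes = deque(maxlen=window)

    def update(self, lat, lon):
        self.fixes.append((lat, lon))
        n = len(self.fixes)
        return (sum(f[0] for f in self.fixes) / n,
                sum(f[1] for f in self.fixes) / n)

smoother = FixSmoother(window=10)
for lat, lon in [(47.60001, -122.33003), (47.60003, -122.33001),
                 (47.59999, -122.33002)]:          # illustrative fixes
    print(smoother.update(lat, lon))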
If you have real-time differential GPS, you can do better. But this requires relatively expensive hardware and software.
Even aside from the above, you have the potential issue of GPS reflection and multipath error. What if your mower has to go under a deck, or thick trees, or near the wall of a house? These common yard features will likely break the assumptions needed to make a good averaging algorithm work and even frustrate attempts at DGPS by blocking critical signals.
To my mind, this seems like a computer vision problem. And not just because that'll give you more accurate row overlaps... you definitely don't want to run over a dog!

In my opinion a standard GPS is in no way accurate enough for this application. A typical consumer-grade receiver that I have used has a position accuracy defined as a CEP of 2.5 metres. This means that, for a stationary receiver in a "perfect" sky-view environment, over time 50% of the position fixes will lie within a circle with a radius of 2.5 metres. If you look at the position that the receiver reports, it appears to wander at random around the true position, sometimes moving a number of metres away from its true location. When I have monitored the position data from a number of stationary units, they could appear to be moving at speeds of up to 0.5 metres per second. In your application this would mean that the lawnmower could be out of position by some not insignificant distance (with disastrous consequences for your prized flowerbeds).
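If you log fixes from a stationary receiver yourself, estimating the CEP from the log is straightforward: it is the median horizontal error about the mean position. A rough sketch, using a flat-earth approximation that is fine over a back yard:

import math

def cep_metres(fixes):
    """Median horizontal error of logged (lat, lon) fixes about their mean.

    Uses a local flat-earth approximation, which is fine over a back yard.
    50% of fixes fall within a circle of this radius -- the CEP.
    """
    mean_lat = sum(f[0] for f in fixes) / len(fixes)
    mean_lon = sum(f[1] for f in fixes) / len(fixes)
    m_per_deg_lat = 111320.0                        # approximate
    m_per_deg_lon = 111320.0 * math.cos(math.radians(mean_lat))
    errors = sorted(
        math.hypot((lat - mean_lat) * m_per_deg_lat,
                   (lon - mean_lon) * m_per_deg_lon)
        for lat, lon in fixes)
    return errors[len(errors) // 2]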
There is a way that this can be done, as has been proved by the tractor manufacturers, who can position seed drills and agricultural sprayers to centimetre accuracy. These systems use Differential GPS, where there is a fixed reference station positioned in the neighbourhood of the tractor being controlled. This reference station transmits error corrections to the mobile unit, allowing it to correct its reported position to a high degree of accuracy. Unfortunately this sort of positioning system is very expensive.

Related

Techniques to evaluate the "twistiness" of a road in Google Maps?

As per the title. I want to, given a Google maps URL, generate a twistiness rating based on how windy the roads are. Are there any techniques available I can look into?
What do I mean by twistiness? Well I'm not sure exactly. I suppose it's characterized by a high turn-to-distance ratio, as well as a high angle-change-per-turn number. I'd also say that elevation change of a road comes into it as well.
I think that once you know exactly what you want to measure, the implementation is quite straightforward.
I can think of several measurements:
the ratio of the road length to the distance between start and end (this would make a long single curve "twisty", so it is most likely not the complete answer)
the number of inflection points per unit length (this would make an almost straight road with a lot of little swaying "twisty", so it is most likely not the complete answer)
These two could be combined by multiplication, so that you would have:
road-length * inflection-points
--------------------------------------
start-end-distance * road-length
You can see that this can be shortened to "inflection-points per start-end-distance", which does seem like a good indicator for "twistiness" to me.
As for taking elevation into account, I think that making the whole calculation in three dimensions is enough for a first attempt.
You might want to handle left-right inflections separately from up-down inflections, though, in order to make it possible to scale the elevation inflections by some factor.
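A rough sketch of the "inflection points per start-end distance" measure, assuming the road is already available as an ordered list of projected (x, y) vertices in metres (extracting those from Google Maps is the separate problem discussed in the other answers):

import math

def twistiness(points):
    """points: ordered (x, y) road vertices in metres (projected coords).

    Returns inflection points per metre of start-end distance, the
    combined measure discussed above. A flip in turn direction (sign of
    the cross product of successive segment vectors) counts as one
    inflection.
    """
    def cross_z(a, b, c):
        return (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])

    signs = []
    for a, b, c in zip(points, points[1:], points[2:]):
        z = cross_z(a, b, c)
        if abs(z) > 1e-9:               # ignore collinear vertices
            signs.append(1 if z > 0 else -1)

    inflections = sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)
    start_end = math.dist(points[0], points[-1])
    return inflections / start_end if start_end else float("inf")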
Try http://www.hardingconsultants.co.nz/transportationconference2007/images/Presentations/Technical%20Conference/L1%20Megan%20Fowler%20Canterbury%20University.pdf as a starting point.
I'd assume that you'd have to somehow capture the road centreline from Google Maps as a vectorised dataset & analyse using GIS software to do what you describe. Maybe do a screen grab then a raster-to-vector conversion to start with.
Cumulative turn angle per km is a commonly-used measure in road assessment. Vertex density is also useful. Note that these measures depend upon an assumption that vertices have been placed at some form of equal density along the line length whilst they were captured, rather than being manually placed. Running a GIS tool such as a "bendsimplify" algorithm on the line should solve this. I have written scripts in Python for ArcGIS 10 to define these measures if anyone wants them.
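Outside ArcGIS, cumulative turn angle per km is easy to compute directly from the vertex list; a sketch (assumes projected coordinates in metres and, as noted above, reasonably even vertex spacing):

import math

def cumulative_turn_angle_per_km(points):
    """Sum of absolute heading changes (degrees) divided by length in km.

    points -- ordered (x, y) vertices in a projected (metre) coordinate
    system. Sensitive to vertex density, as noted above.
    """
    total_turn = 0.0
    length = 0.0
    prev_heading = None
    for a, b in zip(points, points[1:]):
        seg = math.hypot(b[0] - a[0], b[1] - a[1])
        if seg == 0:
            continue
        length += seg
        heading = math.atan2(b[1] - a[1], b[0] - a[0])
        if prev_heading is not None:
            turn = (heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
            total_turn += abs(math.degrees(turn))
        prev_heading = heading
    return total_turn / (length / 1000.0) if length else 0.0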
Sinuosity is sometimes used for measuring bends in rivers - see the help pages for Hawth's Tools for ArcGIS for a good description. It could be misleading for roads that have major changes in course along their length, though.

Accurate parallel swathing algorithm for (GPS) guidance needed

I wrote a Delphi program generating a GPX file as input for a "poor man's guidance system" for aerial spraying by means of an ultralight plane.
By and large, it produces a route (parallel swaths) using a GPX file as output.
The route engine is based on the "Vincenty" algorithm, which works fine for any WGS84 computation, but I can't match the accuracy of the grid generated by ExpertGPS of Topografix (a requirement).
I assume a 2D computation on the ellipsoid:
1) From the start rtept (route point), compute the next rtept given a bearing and an arbitrary distance (swath length).
2) Compute the next rtept relative to the previous bearing (90° turn) and another arbitrary distance (swath distance).
3) Redo 1) with the last rtept as starting point but in the opposite direction, and so on.
What's wrong with it?
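For reference (this is not the original Delphi code), here is the stepping procedure above sketched in Python with geographiclib, which solves the direct geodesic problem with Karney's algorithm rather than Vincenty's; all parameter values are arbitrary:

from geographiclib.geodesic import Geodesic

def swath_route(start_lat, start_lon, bearing_deg,
                swath_length_m, swath_spacing_m, n_swaths):
    """Generate route points for parallel swaths (back-and-forth pattern).

    Mirrors the 2D-on-the-ellipsoid procedure above: step a swath length
    along the current bearing, step the swath spacing at 90 degrees to
    the original bearing, then reverse direction and repeat. A correct
    Vincenty direct solution should give essentially the same coordinates.
    """
    geod = Geodesic.WGS84
    cross_bearing = (bearing_deg + 90.0) % 360.0   # offset always the same way
    lat, lon = start_lat, start_lon
    heading = bearing_deg
    points = [(lat, lon)]
    for i in range(n_swaths):
        g = geod.Direct(lat, lon, heading, swath_length_m)        # 1) run the swath
        lat, lon = g["lat2"], g["lon2"]
        points.append((lat, lon))
        if i < n_swaths - 1:
            g = geod.Direct(lat, lon, cross_bearing, swath_spacing_m)  # 2) shift over
            lat, lon = g["lat2"], g["lon2"]
            points.append((lat, lon))
            heading = (heading + 180.0) % 360.0                   # 3) reverse
    return points

# Example: 10 swaths of 800 m, 30 m apart, heading due north from an
# arbitrary start point (all values illustrative).
route = swath_route(46.5, 3.2, 0.0, 800.0, 30.0, 10)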
You do not describe your Pascal implementation of Vincenty's earth ellipsoid model, so the following is speculation:
The model makes use of numerous trigonometric functions -- ATAN2, COS, SIN, etc. Depending on whether you use the internal Delphi functions or your own versions, there is the possibility of a lack of precision in the calculations. The precision of the value of pi used in your calculations could also affect the precision you require.
Floating point arithmetic can cause decimal place errors. It will make a difference whether you use single, double or real. I believe some of the internal Delphi functions have changed between versions, so the version of Delphi you are using may affect how an internal function is implemented.
If implemented accurately, Vincenty's formula is supposed to be accurate to within 0.5mm. Amazing accuracy. If there are rounding errors or a lack of precision in your Delphi implementation, the positional errors can be significantly larger.
Consider the accuracy of your GPS information. Depending on how many satellites are being used by the GPS receiver at any one time, the accuracy of the positional information changes. Errors on the order of 50 feet or more are possible. Additionally, the refresh of positional information on the GPS receiver is not necessarily instantaneous; therefore, if the swath 'turns' occur rapidly, you will have to ensure the GPS has updated at the turning point.
Your procedure to calculate the pattern seems reasonable, so look at your implementation of Vincenty's algorithm in your Delphi code.
This list is not exhaustive; I imagine others can improve it dramatically. What I mention is based on my experience with GPS and various versions of Delphi, and what I could recall off the top of my head.
Something you might try is to compare your calculations of distance/bearing using your implementation of the algorithm with examples provided on the Internet. There are several online calculators. If you have not been there, the Aviation Formulary is an excellent place to find examples of other navigational tricks: http://williams.best.vwh.net/avform.htm . A comparison will allow you to gain confidence in the precision of the Delphi implementation of Vincenty's algorithm against data calculated by mathematicians. Simply, your implementation of Vincenty may not be precise. Then again, the error may be elsewhere.
I am doing similar farm GPS guidance for a ground rig, just with Android. It is great for a second tractor to help follow the previous A-B tracks, especially when they disappear for a bit.
GPS repeatability from one day to the next will give a larger error. Expensive systems use dGPS (2cm-10cm); without dGPS you can be 5-30 metres different. A simple solution is to recalibrate at a known location. Cheaper light bars use this method.
Drift: as above, except it relates to movement during the job. Mostly unnoticeable (<20cm over 3hrs), but it can rarely jump 1-2 metres -- I think when a satellite connects or disconnects. Again, recalibrate regularly at known coordinates, e.g. the spray fill point.
GPS update speed: most phones update at 1Hz, but in practice there may be ~3 seconds between fixes; at say 50km/hr that is 41.66m between fixes. A ground rig runs at 18km/hr, but there will be tracks to follow after the first run. Try a Bluetooth GPS at 10Hz, check the update speed, and as mentioned, fast turns are a problem.
The accuracy of your inputs and whether your guidance uses dGPS will make a huge difference.
Once you are off your line by, say, 5 metres with 100 metres to go to the next point, then at 50 metres you are still 2.5 metres off, unless your guidance takes you back to the route rather than aiming at the next coordinates.
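To illustrate that, guidance that converges back onto the swath steers on cross-track error (offset from the A-B line). A flat-earth sketch, fine at paddock scale, with coordinates assumed already in a local metre grid:

def cross_track_error(a, b, p):
    """Signed offset (metres) of position p from the A->B line.

    a, b, p: (x, y) in a local metre grid (flat-earth is fine at field
    scale). Positive means p lies to the left of the direction A->B.
    Steering to drive this toward zero brings the vehicle back onto the
    swath line rather than angling at the next waypoint.
    """
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0.0:
        return 0.0
    return (dx * (p[1] - a[1]) - dy * (p[0] - a[0])) / length

# 5 m to the left (west) of a north-running line:
print(cross_track_error((0, 0), (0, 100), (-5, 40)))   # -> 5.0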
I am not using Vincenty, as I can 'bump' back onto the line manually, and over 1km the across-track difference is <30cm according to the only reference I saw; instead I am taking 2 points and creating parallel points across from them.
Hope these ideas help your situation.

How to get volume from mic input on WP7 [duplicate]

Given two byte arrays of data captured from a microphone, how can I determine which one has more spikes in noise? I would assume there is an algorithm I can apply to the data, but I have no idea where to start.
Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room.
If it helps, I am using the Microsoft.Xna.Framework.Audio.Microphone class to capture the sound.
You can convert each sample (normalised to the range -1.0 to 1.0) into a decibel rating by applying the formula
dB = 20 * log10(abs(sample-value))
To be honest, so long as you don't mind the occasional false positive, and your microphone is set up OK, you should have no problem telling the difference between a baby crying and ambient background noise, without going through the hassle of doing an FFT.
I'd recommend having a look at the source code for a noise gate, which does pretty much what you are after, with configurable attack times & thresholds.
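Not XNA/C#, but the arithmetic is tiny; a sketch of the sample-to-dB conversion plus a simple per-frame loudness check (the -30 dBFS threshold is only a placeholder to tune):

import math

def sample_to_dbfs(sample):
    """Convert a normalised sample in [-1.0, 1.0] to dB relative to full scale."""
    magnitude = max(abs(sample), 1e-10)     # avoid log(0)
    return 20.0 * math.log10(magnitude)

def frame_loudness_dbfs(samples):
    """RMS loudness of a frame of normalised samples, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-10))

LOUD_THRESHOLD_DBFS = -30.0     # placeholder; tune against real recordings

def is_loud(samples):
    return frame_loudness_dbfs(samples) > LOUD_THRESHOLD_DBFS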
First use a Fast Fourier Transform to transform the signal into the frequency domain.
Then check if the signal in the typical "cry-frequencies" is significantly higher than the other amplitudes.
The preprocessor of the speex codec supports noise vs signal detection, but I don't know if you can get it to work with XNA.
Or, if you really want some kind of loudness measure, calculate the sum of squares of the amplitudes of the frequencies you're interested in (for example 50-20000Hz), and if the average of that over the last 30 seconds is significantly higher than the average over the last 10 minutes, or exceeds a certain absolute threshold, sound the alarm.
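A sketch of that band-energy comparison with NumPy (the windowing, band edges and the 3x ratio are illustrative knobs, not measured cry characteristics):

import numpy as np

def band_energy(samples, sample_rate, f_lo, f_hi):
    """Sum of squared FFT magnitudes between f_lo and f_hi (Hz) for one frame."""
    frame = np.asarray(samples, dtype=float) * np.hanning(len(samples))
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spectrum[band]) ** 2))

def alarm(recent_energies, baseline_energies, ratio=3.0):
    """True if the short-term average (e.g. last 30 s of frames) is well
    above the long-term baseline (e.g. last 10 min of frames)."""
    return float(np.mean(recent_energies)) > ratio * float(np.mean(baseline_energies))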
Louder at what point? The signal's average amplitude will tell you which one is louder on average, but that is kind of a dumb, brute force way to go about it. It may work for you in practice though.
Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room.
Ok, so, I'm just throwing out ideas here; I am by no means an expert on audio processing.
If you know your input, i.e., a baby crying (relatively loud with a high pitch) versus ambient noise (relatively quiet), you should be able to analyze the signal in terms of pitch (frequency) and amplitude (loudness). Of course, if during the recording someone drops some pots and pans onto the kitchen floor, that will be tough to discern.
As a first pass I would simply traverse the signal, maintaining a running standard deviation of pitch and amplitude throughout, and then set a flag when those deviations jump beyond some threshold that you will have to define. When they come back down you may be able to safely assume that you captured the baby's cry.
Again, just throwing you an idea here. You will have to see how it works in practice with actual data.
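A sketch of that running-deviation idea applied to the amplitude envelope alone (the window length and the k threshold are placeholders you would tune on real recordings):

import numpy as np

def flag_loud_frames(frames, n_baseline=100, k=3.0):
    """Flag frames whose RMS jumps k standard deviations above a running
    baseline built from the previous n_baseline frames.

    frames: iterable of 1-D arrays of normalised samples.
    Returns a list of booleans, one per frame.
    """
    history, flags = [], []
    for frame in frames:
        rms = float(np.sqrt(np.mean(np.square(frame))))
        if len(history) >= n_baseline:
            mean, std = np.mean(history), np.std(history) + 1e-9
            flags.append(rms > mean + k * std)
        else:
            flags.append(False)             # still learning the room
        history.append(rms)
        history = history[-n_baseline:]
    return flags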
I agree with Ed Swangren; it will take a lot of playing with samples of data from a lot of sources. To me, it sounds like the trick will be to limit or hopefully eliminate false positives. My experience with babies is that they are much louder crying than the environment, so keep track of the average measurements (freq/amp/...) of the normal environment and then classify how well any changes match the characteristics of a crying baby. Those characteristics change from kid to kid, so you'll probably want a system that 'learns'. Best of luck.
Update: you might find this library useful: http://naudio.codeplex.com/

What is the shortest perceivable application response delay?

A delay will always occur between a user action and an application response.
It is well known that the lower the response delay, the greater the feeling of the application responding instantaneously. It is also commonly known that a delay of up to 100ms is generally not perceivable. But what about a delay of 110ms?
What is the shortest application response delay that can be perceived?
I'm interested in any solid evidence, general thoughts and opinions.
The 100 ms threshold was established over 30 years ago. See:
Card, S. K., Robertson, G. G., and Mackinlay, J. D. (1991). The information visualizer: An information workspace. Proc. ACM CHI'91 Conf. (New Orleans, LA, 28 April-2 May), 181-188.
Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference Vol. 33, 267-277.
Myers, B. A. (1985). The importance of percent-done progress indicators for computer-human interfaces. Proc. ACM CHI'85 Conf. (San Francisco, CA, 14-18 April), 11-17.
What I remember learning was that any latency of more than 1/10th of a second (100ms) for the appearance of letters after typing them begins to negatively impact productivity (you instinctively slow down, less sure you have typed correctly, for example), but that below that level of latency productivity is essentially flat.
Given that description, it's possible that a latency of less than 100ms might be perceivable as not being instantaneous (for example, trained baseball umpires can probably resolve the order of two events even closer together than 100ms), but it is fast enough to be considered an immediate response for feedback, as far as effects on productivity. A latency of 100ms and greater is definitely perceivable, even if it's still reasonably fast.
That's for visual feedback that a specific input has been received. Then there'd be a standard of responsiveness in a requested operation. If you click on a form button, getting visual feedback of that click (eg. the button displays a "depressed" look) within 100ms is still ideal, but after that you expect something else to happen. If nothing happens within a second or two, as others have said, you really wonder if it took the click or ignored it, thus the standard of displaying some sort of "working..." indicator when an operation might take more than a second before showing a clear effect (eg. waiting for a new window to pop up).
New research as of January, 2014:
http://newsoffice.mit.edu/2014/in-the-blink-of-an-eye-0116
...a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds... That speed is far faster than the 100 milliseconds suggested by previous studies...
At the San Francisco Opera House, we routinely set up precise delay settings for each of our speakers. We can detect 5 millisecond changes in the delay times to our speakers. When you make such subtle changes, you change where the sound appears to come from. Often we want sound to seem as if it's coming from someplace other than where the speakers are; precise delay adjustments make this possible. Sound delays of 15 milliseconds are very obvious even to untrained ears, because they radically shift where the sound appears to come from. A simple test to prove this is to play sound through multiple speakers, and have the subject close their eyes and point to where the sound is coming from. Now make a slight change in the delay time to one of the speakers of just a few milliseconds, and have the person point again to where the sound is coming from. Making changes in delay times is acoustically very similar to moving the actual speakers.
I don't think anecdotes or opinions are really valid for answers here. This question touches on the psychology of user experience and the sub-conscious mind. The human brain is powerful and fast and mere milliseconds do count and are registered. I am no expert but I know there is much science behind e.g. what Matt Jacobsen mentioned. Check out Google's study here http://services.google.com/fh/files/blogs/google_delayexp.pdf for an idea of how much it can affect site traffic.
Here's another study by Akamai - 2 second response time
http://www.akamai.com/html/about/press/releases/2009/press_091409.html (From https://ux.stackexchange.com/questions/5529/once-apon-a-time-there-was-a-10-seconds-to-load-a-page-rule-what-is-it-nowa )
Does anyone have any other studies to share?
Persistence of vision is around 100ms so it should be a reasonable visual feedback delay. 110ms should make no difference, as it is an approximate value. In practice you won't notice a delay below 200ms.
Out of my memory, studies have shown that users lose patience and retry an operation after around 2s of inactivity (in the absence of feedback), e.g. clicking on a confirm or action button. So plan on using some kind of animation if the action takes longer than 1s.
I worked on an application that had an explicit business goal of being blindingly fast, and we had a maximum allowed server time of 150ms for processing a full web page.
No solid evidence but for our own application, we allow a maximum of one second between a user action and feedback. If it does take longer, a "waiting box" should be shown.
A user should see "something" happening within a second of causing an action.
Use the dual of the test for visual spatial resolution (two parallel black bars, with an equal width and an equal gap between them; reduce the angular subtense until they appear to be one line, i.e. scale down or simply move away. The point at which they seem to merge into one line shows the threshold).
Use a function generator to blink an LED on for an interval, then off, then on, then off -- the same time delay for each interval -- but repeat the pattern while gradually decreasing that delay. This is the same as above, but with time in place of space.
Imagine an oscilloscope image like so:
_________/^d^\_d_/^d^\_________
I note that at a 41 ms interval, I perceive only one longer blink, but at 42 ms, I just perceive it as an extremely rapid double blink. Thus, the threshold is ~42ms. It probably varies depending on person, age, condition, etc.
This is close to 24 fps, which is probably why cinema works at that presentation rate.
Reaction time to see something and then decide to react, say by clicking the mouse, is much longer again. Thus, it's not surprising that experiments requiring a reaction response to measure yield a longer time, but that longer delay wasn't what you were asking for, and the above experiment is easy and illuminating!
But note also -- smoothly moving animations require the visual cortex to work harder, delaying visual comprehension. This delay is 'hidden' from perception, so longer delays (several hundred ms) can be 'hidden' by just providing something that's difficult to see because it is moving.
The effect that hides it is called Chronostasis. Basically, glancing somewhere 'new' requires the visual cortex to work harder to 'de-render' / 'recognise' the scene. This takes a remarkably long time, during which your consciousness is essentially 'paused'.
Once looking at a mostly-constant scene, only changes need this processing, so smaller/faster changes are possible and your perceptual experience resumes, and faster/smaller movements are detectable.
The detection of changes visually is processed basically on your retina. Your eyes also have a natural 'bandpass' response -- stare unblinkingly at anything for sufficient time, and at sufficient distance for saccades to be unable to change the image much, and you will find your visual feed fading out to 'grey'. This is what gives us our 'white balance', and is somewhat similar to the automatic gain control on analogue radio/tv.
The point is, that your eyes themselves have a time constant to respond, but this is actually dependant on the strength of the stimulus. (brightness of the LED, for our case).
Too bright, and the ability of your retinal cells to 'relax' back from the brightness, ie, respond to the 'sudden dark', is compromised.
The effect which keeps you seeing bright things after the light has stopped is called 'persistence of vision', and old cathode-ray picture tubes more or less depend heavily on it for them to work at all.
This is the one that's usually 100 ms or so, but it's not a 'sharp' interval -- more like an exponential roll-off, and again -- it changes duration depending on how bright the stimulus is relative to how dark-adjusted (i.e., sensitive) the eye is at that moment.
For duller, faster changes, especially changes outside your fovea, you will perceive even higher rates easily. E.g., flickering lights. Those outer parts of your retina (most of the area, actually) are adapted to detecting movement and bringing it to your attention. So it makes sense that although they lack spatial resolution, they have greater time resolution / a faster response.
But this also means animating things usually requires even finer time steps, otherwise 'jumpiness' is perceptible, mostly due to that faster response.
Note all the scaling/sliding full screen animations iOS uses -- these essentially exploit chronostasis to hide technically unavoidable loading delays, giving the perception that those products respond instantly and smoothly at all times.
So, show something different within 42 ms -> instant response.
Keep animating otherwise useless hard-to-see-properly visuals continuously at high frame rates, then stop suddenly when done -> hides the delay so long as enough is visually busy, and the delay isn't too long. (probably 250ms is pushing the friendship).
This also seems to tee up with other's perceptions of input lag, for example : http://danluu.com/input-lag/
100ms is totally wrong. You can prove this yourself using your fingers, a desk, and a watch with visible seconds. Synchronising to the watch's seconds, drum out beats on the desk continuously such that 16 beats are drummed out every second. I chose 16 because it is natural to drum out multiples of two, so it's like four strong beats with three weak beats in between. Adjacent beats are clearly discernible by their sound. The beats are separated by about 60ms, so even 60 ms is actually still too high. Therefore the threshold is way below 100ms, especially if sound is involved.
For instance, a drum app or a keyboard app needs a delay of more like 30ms, or else it gets really annoying, because you hear the sound coming from the physical button / pad / key well before the sound comes out of the speakers. Software like ASIO and jack were made specifically to deal with this issue, so no excuses. If your drum app has a 100ms delay, I will hate you.
The situation for VoIP and high powered gaming is actually worse, because you need to react to events in real time, and in music, at least you get to plan ahead at least a little. For an average human reaction time of 200ms, a further 100ms delay is an enormous penalty. It noticeably changes the conversational flow of VoIP. In gaming, 200ms reaction time is generous, especially if the players have a lot of practice.
For a reasonably current scholarly article, try How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch (PDF). While the main focus was on the JND (Just Noticeable Difference) of delay, there is some good background on absolute delay perception, and they also acknowledge and account for 60Hz monitors (16.7 ms repaint times) in their second experiment.
I am a cognitive neuroscientist who studies visual perception and cognition.
The paper by Mary Potter mentioned above regards the minimum time required to categorize a visual stimulus. However, understand that this is under laboratory conditions in the absence of any other visual stimuli, which certainly would not be the case in the real world user experience.
The typical benchmark for a stimulus-response / input-stimulus interaction, that is, the average amount of time for an individual's minimum reaction speed or input-response detection, is ~200ms. To be certain there is no detectable difference, this threshold could be lowered to around 100ms. Below this threshold, the temporal dynamics of your cognitive processes take longer to compute the event than the event itself, so there is nearly no chance of any ability to detect or differentiate it. You could go lower, to say 50 ms, but it really wouldn't be necessary. At 10 ms you've gone into the territory of overkill.
For web applications, 200ms is considered an unnoticeable delay, while 500ms is acceptable.

Predictive "blood glucose" algorithm?

I'm writing an app that lets a diabetic user enter his/her "blood glucose" readings, and then charts them on a graph over time from left to right. Since the blood readings will be done only several times a day, an algorithm would be handy to:
a) fill in the gaps on the graph between readings (curves would be more realistic than jerky lines) and allow a more accurate "blood glucose level" daily average
b) roughly predict what will happen in the future (if the user eats nothing that will affect his blood levels)
I suck at calculus. I'm hoping someone here knows a library for this stuff? I'm hoping someone knows of an algorithm that has been tailored for this specific problem already (e.g.: where someone has compared it to real data from diabetics)
Disclaimer: I am very aware that any such algorithm would vary wildly depending on the user. I'm just looking to improve on straight angular lines. Regardless of the diabetic, there is a limit to the rate that blood sugars can rise and fall.
I'm using Javascript, but as it's just math, I could port it from C, Java or whatever.
Blood sugar behavior is very complicated. It is affected by
Current blood sugar (complicated by the possible presence of ketones if the patient is hyperglycemic)
recent food out to several hours depending on the type and how much
recent fast acting insulin (with variety and patient dependent reaction profiles between 45 minutes and two hours long. Oh, and delivery mechanism)
long-acting insulin out past 12 hours (again patient and variety dependent)
activity levels
stress levels
illness
basal insulin rate if the patient wears a pump
ad nauseum
Very hard problem. Any heuristic---any heuristic---you choose would be highly misleading. So, short answer:
Don't do it.
This comes, in part, from having compared a diabetic's 24-hour continuous glucose log with the ~10 finger pricks taken during the same time. I.e., my suggestion is data driven.
Edit: Evidently I didn't make myself clear.
You can't even get close.
Nothing you can do with finger prick data can be remotely reliable.
Connecting the dots with any lines (even straight segments) is just plain wrong. It doesn't reflect reality. Not even a little bit.
I'm an experimental particle physicist. Complicated data sets are what I do. There is a diabetic in my life (did you guess?). This matters to me.
But I've seen the high frequency data logs, side-by-side with a log of the day's finger-pricks, exercise, food, and insulin.
If you could get every-fifteen-minutes data, I'd say go ahead and use a spline. It won't be dangerously misleading. But, if you have 6-10 measurements across the day, you know nothing.
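For what it's worth, with dense data a smoothing spline is nearly a one-liner in SciPy; a sketch with made-up every-15-minute readings (for display only, not prediction):

import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical every-15-minutes readings: minutes since midnight, mg/dL.
times = np.arange(0, 24 * 60, 15)
glucose = 110 + 25 * np.sin(times / 180.0) + np.random.normal(0, 8, times.size)

spline = UnivariateSpline(times, glucose, s=times.size * 40)  # s controls smoothing
dense_times = np.linspace(times.min(), times.max(), 500)
smoothed = spline(dense_times)       # values to plot as the curve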
Good news: continuous monitoring is coming down in price. It's out of the lab and available with some pumps even now.
For those who aren't familiar with this: compliant diabetic patients do (results of extremely unscientific polling) 4-6+ glucose tests a day as a matter of course, and several additional ones in the 1-2 hours following any unexpected excursion (they get physical symptoms that allow them to detect severe excursions).
This serves to give the patient a rough idea of how they are doing at controlling their glucose levels, but they also go to a lab to get a Hemoglobin A1C drawn every quarter (or so). The A1C result is dependent mostly on their average blood glucose.
I've talked to people who clocked in at 80-110 (quite favorable numbers) four times a day for months, and got back an A1C suggesting an average above 150 (not desirable at all). Presumably they were going high in the night. And I've heard similar stories from people who were probably going low---very low---in their sleep.
The lesson is:
Finger prick readings have their place, but don't try to extrapolate them to times not well sampled.
If you want to do just a straight fit of the data to make things easier to view, then something like what Charlie Martin recommended would likely work well. However, as noted by dmckee, this data really wouldn't mean anything.
What you are trying to do is actually more in line with pharmacokinetics, which is an entire scientific field in and of itself. In this case I'm not even sure it would entirely apply except for Type I Diabetes, as most of what I know about pharmacokinetics only applies to drug studies; if something is being produced by the body then you are likely looking at entirely different types of analysis. If you are interested in the subject then there are quite a few book previews on Google Books if you do a search for "pharmacokinetics", but due to the nature of the subject they are very math heavy and assume that you have an understanding of chemistry and biology as well.
Okay, you're going to be looking for some fitted curve. The thing with that is that for n points there are fitting polynomials up to order... n-1, I think. It's been a while. Yep, by golly, I'm right. The common thing when you have lots of points and don't want a complicated function (which you don't) is to use a least-squares approximation.
Probably the best thing is to look for a canned routine you can use; these exist in most stats packages. Give us a little more detail on the environment you want and we might be able to point you more closely to something suitable.
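Since the question says JavaScript, you would port the maths or use a stats library there; purely to show the shape of the least-squares idea, here it is with NumPy (low polynomial order on purpose, and keeping dmckee's caveat in mind that the curve is for viewing, not inference):

import numpy as np

# Hypothetical finger-prick readings: hours since midnight, mg/dL.
hours = np.array([7.0, 9.5, 12.0, 15.0, 18.5, 22.0])
glucose = np.array([95.0, 150.0, 120.0, 105.0, 160.0, 110.0])

# Low-order least-squares fit: with n points an exact polynomial of
# degree n-1 exists, but it oscillates wildly, so keep the degree small.
coeffs = np.polyfit(hours, glucose, deg=3)
curve = np.poly1d(coeffs)

grid = np.linspace(hours.min(), hours.max(), 200)
smooth_values = curve(grid)          # plot these instead of straight segments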
This is most likely not going to work, but Artificial Neural Networks may, and I repeat may, be able to get something out of a good data set. By good, I mean weeks or months of continuous recording, and even then I wouldn't trust the data set unless I had very good reason to. I also don't think you'll get predictive data out of it, but it may depend on how you implement it. Overall, if you were to do this it would seem to be more of a hobby thing to see if it can even come close, like "oh neat, I got a neural network to within X amount of accuracy". Again, I must stress: don't use this in any sort of production situation or anywhere where it could possibly hurt or kill someone!

Resources