Real-time workaround for fixed sampling time on Windows

I am trying to collect data off an accelerometer sensor. I have an Arduino doing the analog to digital conversion of the signal and sending it through a serial port to MATLAB on Windows.
I send a reading every 5 ms from the Arduino through the serial port. In MATLAB I read it from the serial port and save it in a vector, together with the time at which it was read, obtained with the clock function.
If I plot the column of the vector in which I saved the second at which each read happened, I get a non-linear curve, and the difference between one read and the next varies slightly.
Is there any way to get the data saved in real time with fixed sampling time?
Note: I am using 250000 baud rate.
Matlab Code:
%%%%% Initialisation %%%%%
clear all
clc
format shortg
cnt = 1;  % file name changer
sw  = 1;  % switch: 0 = add to current vector, 1 = start a new vector
%%%%% Initialisation %%%%%

%%%%% Communication %%%%%
arduino = serial('COM7', 'BaudRate', 250000);
fopen(arduino);
%%%%% Communication %%%%%

%%%%% Reading from Serial and Writing to .mat file %%%%%
while true
    if sw == 0
        if length(Vib(:,1)) == 1000  % XXXX samples in XX minutes
            filename = sprintf('C:/Directory/%d_VibrationReading.mat', cnt);
            save(filename, 'Vib');
            clear Vib
            cnt = cnt + 1;
            sw  = 1;
        end
    end
    scan = fscanf(arduino, '%f');
    if isfloat(scan) && length(scan(:,1)) == 6  % change length for validation
        vib = scan';
        if sw == 1
            Vib = [vib clock];
            sw  = 0;
        else
            Vib = [Vib; vib clock];
        end
    end
end
%%%%% Reading from Serial and Writing to .mat file %%%%%

% Close Arduino Serial Port
fclose(arduino);
Image 1 (not reproduced here) shows the data received through serial, each row corresponding to one serial read; Image 2 shows that data saved together with the clock timestamp.

I know that my answer does not contain a quick and easy solution; instead it primarily gives advice on how to redesign your system. I worked with real-time systems for several years and saw this done wrong too many times. It might be possible to just "fix" your current communication pattern by tweaking its performance, but I am convinced you will never receive reliable time information that way.
I will answer this from a general system-design perspective instead of trying to fix your code. Here is where I see the problems:
In general, it is a bad idea to append time information on the receiving PC. Whenever the sensor is capable and has a clock, append the time information on the sensor system itself. This allows for an accurate relative timing between the measurements. Some clock adjustment might be necessary when the clock on the sensor is not set properly, but that is just a constant offset.
Switch from ASCII-encoded data to binary data. With your sample rate and baud rate, you only have about 50 bytes for each message.
Write a robust receiver. Just dropping messages you "don't understand" is not a good idea. If the buffer has filled up, a single read might return multiple messages unless you use a proper terminator.
Use preallocation. You know how large the batches you want to write are.
A simple solution for a message:
2 bytes - clock milliseconds
4 bytes - unix timestamp of measurement
For each sensor
2 bytes - int16 sensor data
2 bytes - terminator, a constant value. Use a value which is outside the range of all previous integers, e.g. intmax('int16')
This message format should theoretically allow you to use up to 21 sensors; a sketch of the sending side is given at the end of this answer. Now to the receiving part:
To get a first version running with a good performance, call fread (serial) with large batches of data (size parameter) and dump all readings into a large cell array. Something like:
terminator = intmax('int16');                % the constant the sender appends
C = cell(1000,1);
% seek until you hit a terminator, so reading starts on a message boundary
while fread(arduino, 1, 'int16') ~= terminator
end
for ix = 1:numel(C)
    C{ix} = fread(arduino, 1000, 'int16');   % read one large batch of int16 values
end
fclose(arduino);
Once you have read the data, concatenate it into a single vector (e.g. C = vertcat(C{:});) and parse it in post-processing. If the performance is sufficient you may later move to on-the-fly processing, but I recommend starting this way to get the system established.
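For completeness, here is the sending side sketched in Arduino C++. It follows the field layout listed above, but everything named here (getUnixTime, readSensor, the 5 ms pacing via delay) is a placeholder for whatever clock source and acquisition code you actually use - treat it as a sketch, not a drop-in implementation.

#include <Arduino.h>

const int16_t TERMINATOR  = 32767;   // intmax for int16, outside the sensor range
const uint8_t NUM_SENSORS = 6;

uint32_t getUnixTime()         { return 0; }             // placeholder clock source
int16_t  readSensor(uint8_t i) { return analogRead(i); } // placeholder acquisition

void sendMessage() {
  uint16_t ms = (uint16_t)(millis() % 1000);   // 2 bytes - clock milliseconds
  uint32_t ts = getUnixTime();                 // 4 bytes - unix timestamp
  Serial.write((uint8_t*)&ms, sizeof(ms));
  Serial.write((uint8_t*)&ts, sizeof(ts));
  for (uint8_t i = 0; i < NUM_SENSORS; i++) {
    int16_t v = readSensor(i);                 // 2 bytes per sensor
    Serial.write((uint8_t*)&v, sizeof(v));
  }
  Serial.write((const uint8_t*)&TERMINATOR, sizeof(TERMINATOR));
}

void setup() { Serial.begin(250000); }
void loop()  { sendMessage(); delay(5); }      // roughly one message every 5 ms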

Related

Start bit and end bits in Serial Data Transmission Confusion

I am a bit confused about how the start and stop bits are differentiated from the actual data bits. For example, say "data", whose binary is 01100100 01100001 01110100 01100001, is being sent from System A to System B as a single packet (because it's less than 64 kibibytes), bit by bit. Please let me know how the start and stop bits are added to these data bits. There were two related threads on Stack Overflow with only one answer, which was not accepted and is very confusing. Can someone explain it in simple terms please? Thank you.
When you want to send data over a serial line, you need to synchronize the transmitter and the receiver. The start bit simply marks the beginning of a data chunk (typically one byte, with or without a parity bit), and the stop bit marks the end of that chunk.
In the beginning there is no data being transmitted - the line idles at '1' for some time. The receiver is waiting for the start bit (the start bit is always '0', the stop bit always '1'). When the start bit arrives, the receiver starts an internal timer and on every tick it reads the value from the line, until all data and parity bits are read. Then it waits for the stop bit, and after that it begins waiting for a new start bit.
Without the start bit, the receiver would not know when to start reading data bits. Imagine sending a byte of all ones without parity: the line would just stay in the '1' state the whole time.
The stop bit is not strictly necessary; it mainly improves reliability by guaranteeing an idle period before the next start bit (both receiver and transmitter must use the same frequency).
So the start and stop bits don't need to be distinguished from data bits. Quite the opposite: they allow the receiver to properly identify the data bits.
When sending your data, you would take it byte by byte, and for each byte you would send the start bit ('0') first, then the individual data bits, then maybe a parity bit, and then a '1' - the stop bit, everything at a given frequency. Then you would wait at least one timer tick before the next start bit.
Usually you don’t need to do all of this, because there are specialized chips for this on the board. You just provide your data using a buffer and wait until they’re sent, or you wait for data being received.
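To make the framing concrete, here is a small stand-alone C++ sketch (no real UART involved) that prints the line levels of one 8N1 frame: a low start bit, eight data bits sent LSB first, then a high stop bit. The byte 0x64 is the 'd' (01100100) from the question.

#include <cstdint>
#include <cstdio>

// Print the line levels for one 8N1 frame of a byte:
// start bit (0), 8 data bits sent LSB first, stop bit (1).
void printFrame(uint8_t byte) {
    std::printf("start:0 ");
    for (int i = 0; i < 8; ++i)
        std::printf("d%d:%d ", i, (byte >> i) & 1);
    std::printf("stop:1\n");
}

int main() {
    printFrame(0x64);   // 'd', the first byte of "data" from the question
    return 0;
}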

GPIO32 pin works in analog mode, always reads 0 in digital mode

I'm having some difficulty getting PCNT pulse counting working with a prototype ESP32 device board.
I have a water level sensor (model D2LS-A) that signals state by the frequency of a square wave signal it sends to GPIO32 (20Hz, 50Hz, 100Hz, 200Hz, 400Hz).
Sadly, the PCNT counter stays at 0.
To troubleshoot, I tried putting GPIO32 in ADC mode (attenuation 0, 10-bit mode) to read the raw signal (sampling it many times a second), and I'm getting values that I would expect (0-1023). But trying the same thing using digital GPIO mode, it always returns 0, never 1 in all the samples.
Since the PCNT ESP IDF component depends on reading the pin digitally, the counter never increments past 0.
So the real problem I'm having is: why aren't the ADC readings (between 0-1023) translating to digital readings of 0-1 as one would expect?
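For reference, the two read paths described above would look roughly like this with the ESP-IDF legacy drivers (GPIO32 is ADC1 channel 4 on the ESP32). This is only a sketch of the configuration being compared, not a verified fix.

#include "driver/adc.h"
#include "driver/gpio.h"

void read_gpio32_both_ways(void)
{
    // Analog path: 10-bit width, 0 dB attenuation, as described above.
    adc1_config_width(ADC_WIDTH_BIT_10);
    adc1_config_channel_atten(ADC1_CHANNEL_4, ADC_ATTEN_DB_0);
    int raw = adc1_get_raw(ADC1_CHANNEL_4);      // values 0-1023 are seen here

    // Digital path: hand the pin back to the GPIO matrix and read it as input.
    gpio_reset_pin(GPIO_NUM_32);
    gpio_set_direction(GPIO_NUM_32, GPIO_MODE_INPUT);
    int level = gpio_get_level(GPIO_NUM_32);     // reportedly always 0 here
    (void)raw;
    (void)level;
}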

How to prevent SD card from creating write delays during logging?

I've been working on an Arduino (ATMega328p) prototype that has to log data during certain events. An LSM6DS33 sensor is used to generate 6 values (2 bytes each) at a sample rate of 104 Hz. This data needs to be logged for a period of 500-20000ms.
In my code, I generate an interrupt every 1/104 sec using Timer1. When this interrupt occurs, data is read from the sensor, calibrated and then written to an SD card. Normally, this is not an issue. Reading the data from the sensor takes ~3350us, calibrating ~5us and writing ~550us. This means a total cycle takes ~4000us, whereas 9615us is available.
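As a point of reference, the cycle described above could be structured like this. It is a sketch only: it assumes the TimerOne library, and readSensor/calibrate/logToSd are placeholders for the ~3350us read, ~5us calibration and ~550us SD write mentioned above.

#include <TimerOne.h>

volatile bool sampleDue = false;

void onTick() { sampleDue = true; }            // keep the ISR minimal

void readSensor() { /* read 6 x int16 from the LSM6DS33 (placeholder) */ }
void calibrate()  { /* apply calibration (placeholder) */ }
void logToSd()    { /* write one record to the SD card (placeholder) */ }

void setup() {
  Timer1.initialize(9615);                     // 1/104 s in microseconds
  Timer1.attachInterrupt(onTick);
}

void loop() {
  if (sampleDue) {
    sampleDue = false;
    readSensor();
    calibrate();
    logToSd();                                 // the step that occasionally stalls
  }
}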
In order to save power, I wish to lower the voltage to 3.3 V. According to the Atmel datasheet, this also means that the clock frequency should be lowered to 8 MHz. Assuming everything will run twice as slowly, a measurement cycle would still be possible because ~8000us < 9615us.
After some testing (still at 5 V @ 16 MHz), however, it occurred to me that every now and then a write cycle would take ~1880us instead of ~550us. I am using the SdFat library to write to and test SD cards (RawWrite example). The following results came in when I tested the card:
Start raw write of 100000 KB
Target rate: 100 KB/sec
Target time: 100 seconds
Min block write time: 1244 micros
Max block write time: 12324 micros
Avg block write time: 1247 micros
As can be seen, the average write time is fairly consistent, but sometimes a peak duration of 10x the average occurs! According to the author of the library, this is because the SD card needs to run erase cycles after a certain number of write cycles, which causes a write delay (src: post #18 & #22). This delay, however, pushes the time required for a cycle outside the available 9615us bracket, because the total measurement cycle would then be 10672us.
The data I am trying to write is first put into a string using sprintf:
char buf[48] = "";  // must be large enough for six tab-separated values (20 bytes can overflow)
sprintf(buf, "%li\t%li\t%li\t%li\t%li\t%li",
        rawData[0], rawData[1], rawData[2], rawData[3], rawData[4], rawData[5]);
myLog.println(buf);
This writes the data to a .txt file. At my data rate, only 21*104 = 2184 B/s would be needed. Lowering the speed of the RawWrite example to 6 KB/s causes the SD card to write without the extended write delay. Yet my code still has these delays, even though less data is written.
My question is: how do I prevent this delay from occurring (if possible)? And if not possible, how can I work around it? It would help if I understood why exactly the delay occurs, because the interval is not always the same (every 10-15 writes).
Some additional info:
The sketch currently uses 69% of RAM (2kB) with variables. Creating two 512 byte buffers - like suggested in the same forum - is not possible for me.
Initially, I used two strings. Merging them into one didn't affect the write speed significantly.
I don't know how to work around the delay, but I experienced more stable and faster write times when I wrote to a binary file instead of a ".csv" or ".txt" file.
The following link provides a fine script for writing data as a binary struct to the SD card. (There are some small typos in the example, but they are easily fixed.)
https://hackingmajenkoblog.wordpress.com/2016/03/25/fast-efficient-data-storage-on-an-arduino/
This will not help you with the time variation, but it should reduce the writing time and thus make the timing issue less severe.
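To illustrate the binary-record idea: this is not the code from the linked post, just a sketch assuming the SdFat API, with a hypothetical LogRecord layout that matches the six int16 sensor values.

#include <SdFat.h>

// One fixed-size record: a timestamp plus the six raw int16 values.
struct LogRecord {
  uint32_t t_ms;      // millis() at the time of the sample
  int16_t  raw[6];    // the six sensor values
};                    // 16 bytes per record

SdFat  sd;
SdFile logFile;

bool openLog(uint8_t csPin) {
  if (!sd.begin(csPin)) return false;
  return logFile.open("log.bin", O_WRITE | O_CREAT | O_AT_END);
}

void logRecord(const LogRecord& rec) {
  // Writing the struct directly avoids the sprintf/ASCII overhead and keeps
  // every write the same small, fixed size.
  logFile.write(&rec, sizeof(rec));
}

void closeLog() {
  logFile.sync();     // flush buffered data to the card
  logFile.close();
}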

MME Audio Output Buffer Size

I am currently playing around with outputting FP32 samples via the old MME API (waveOutXxx functions). The problem I've bumped into is that if I provide a buffer length that does not evenly divide the sample rate, certain audible clicks appear in the audio stream; when recorded, it looks like some of the samples are lost (I'm generating a sine wave for the test). Currently I am using the "magic" value of 2205 samples per buffer for 44100 sample rate.
The question is, does anybody know the reason for these dropouts and if there is some magic formula that provides a way to compute the "proper" buffer size?
The safe alignment of data buffers is the nBlockAlign value of the WAVEFORMATEX structure:
Software must process a multiple of nBlockAlign bytes of data at a time. Data written to and read from a device must always start at the beginning of a block. For example, it is illegal to start playback of PCM data in the middle of a sample (that is, on a non-block-aligned boundary).
For PCM formats this is the number of bytes for a single sample frame across all channels (nChannels * wBitsPerSample / 8). Non-PCM formats have their own alignments, often equal to the length of a format-specific block, e.g. 20 ms.
Back when waveOutXxx was the primary API for audio, carrying over unaligned bytes would have been an unreasonable burden and unneeded performance overhead for the API. Nowadays this API is a compatibility layer on top of other audio APIs, and I suppose unaligned trailing bytes are simply stripped so the rest of the content can still play; otherwise the whole buffer would have to be rejected over what is usually just a small, non-fatal inaccuracy on the caller's side.
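So one way to avoid the issue is to round every buffer you submit down to a whole number of blocks; a minimal sketch, assuming the standard Windows multimedia headers:

#include <windows.h>
#include <mmsystem.h>

// Round a requested buffer size (in bytes) down to a whole number of blocks,
// so every buffer handed to waveOutWrite() is nBlockAlign-aligned.
DWORD AlignToBlock(const WAVEFORMATEX& wfx, DWORD requestedBytes)
{
    if (wfx.nBlockAlign == 0) return requestedBytes;   // defensive guard
    return requestedBytes - (requestedBytes % wfx.nBlockAlign);
}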
If you fill the audio buffer with sine samples and play it looped, it will very easily click unless the buffer length is a whole number of periods of the sine; the audible click is in fact a discontinuity in the wave. A more advanced technique is to fill the buffer dynamically: set a callback notification as the buffer pointer advances and fill the buffer with appropriate data at the appropriate offset. I would use a larger buffer, as 2205 samples is too short to get an async notification, calculate the data and write the buffer, all while playing - but that depends on CPU power.
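A quick way to pick such a length is to round the desired buffer size up to a whole number of periods of the test tone; a sketch, assuming the sample rate is divisible by the tone frequency (e.g. 44100 Hz and a 441 Hz sine):

// Smallest buffer length, at least minSamples long, that holds a whole number
// of periods of the test tone, so looping it produces no discontinuity.
unsigned wholePeriodBuffer(unsigned sampleRate, unsigned toneHz, unsigned minSamples)
{
    unsigned period = sampleRate / toneHz;                 // samples per cycle
    unsigned cycles = (minSamples + period - 1) / period;  // round up
    return cycles * period;
}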

GPS Time synchronisation

I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I have heard, these devices use a specific trigger point for when they send the sentence with the .000 timestamp - as far as I know, the $ of the GGA sentence.
So I'm parsing the GGA sentence, and take the timestamp when the $ is received (I compensate for any further characters being read in the same operation using the serial port baudrate).
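For reference, that compensation is just the number of characters read after the $ multiplied by the character time on the wire; a sketch of the arithmetic, assuming 8N1 framing (10 bits per character):

// Seconds by which the '$' actually arrived before "now", given how many
// characters were read after it in the same read operation.
double dollarOffsetSeconds(int charsAfterDollar, int baudRate)
{
    const double bitsPerChar = 10.0;   // start bit + 8 data bits + stop bit
    return charsAfterDollar * bitsPerChar / baudRate;
}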
From this information I calculate the offset for correcting the system time, but when I compare the corrected time against some NTP servers, I get a constant difference of 250 ms; when I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
But of course I'm not sure where this offset comes from, and if it is somehow specific to the GPS mouse I'm using or my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
Here is some sample data from my device, with the $ character I will take as the time offset marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
You have to take into account what is going on inside the GPS device:
receive the satellite signals and calculate position, velocity and time,
prepare the NMEA message and put it into the serial port buffer,
transmit the message.
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the position and the moment it begins transmitting the data.
Here is one analysis of latency in consumer-grade GPS receivers from 2005; there you can find latency measurements for specific NMEA sentences.
