Sensor stops collecting when the watch is not charging - wear-os

I'm writing a simple data-logging app that collects accelerometer and gyroscope data and writes it to CSV.
It works fine while the watch is on the dock for charging. When it's removed from the dock and runs on battery, it stops delivering sensor events to onSensorChanged.
Meaning of the graph: the number of samples per second, which should stay close to 100 since the sampling rate is 100 Hz. The straight line at the beginning lines up with the time the watch was on the dock.
My code does acquire a WakeLock from PowerManager. What else could cause this problem?
P.S.: Data collection is implemented in a WearableListenerService. Sensor events from onSensorChanged are pushed into a double buffer. There's a single writer thread that keeps checking the buffers, pops a full one, and writes it to CSV.
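For reference, a minimal sketch of that setup in Java (the question doesn't show code, so the class name, wake-lock tag, and accelerometer-only registration are placeholders):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.PowerManager;
import com.google.android.gms.wearable.WearableListenerService;

public class DataLoggerService extends WearableListenerService
        implements SensorEventListener {

    private PowerManager.WakeLock wakeLock;
    private SensorManager sensorManager;

    @Override
    public void onCreate() {
        super.onCreate();
        PowerManager pm = (PowerManager) getSystemService(POWER_SERVICE);
        // A PARTIAL_WAKE_LOCK keeps the CPU awake, but by itself it does not
        // guarantee continuous sensor delivery once the watch is off the charger.
        wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "datalogger:sensors");
        wakeLock.acquire();

        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // 10,000 us sampling period = the 100 Hz rate from the question.
        sensorManager.registerListener(this, accel, 10_000);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Push the event values into the double buffer; the single writer
        // thread drains full buffers to CSV.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}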

Related

TwinCAT fails to save data to CSV

I am part of a tractor pulling team and we have a Beckhoff CX8190-based PLC for data logging. The system works most of the time, but every now and then saving the sensor values (one set is collected every 10 ms) to CSV fails, mostly in the middle of a CSV row. The guy who built the code is new to TwinCAT and does not know how to find what causes this. Any ideas where to look for the reason?
Writing to a file is always an asynchronous action in TwinCAT. That is to say, it is not a real-time action, and it is not guaranteed that the write finishes within the 10 ms task cycle time. Therefore these function blocks always have a BUSY output, which has to be evaluated: the function block must be called successively until the BUSY output returns to FALSE. Only then can a new write command be executed.
I normally tackle this task with a two-sided-buffer algorithm. Let's say the buffer array has 2x100 entries. Fill the first 100 entries with sample values, then write them all together to the file with one command. When that's done, clear that half of the buffer; in the meantime the other half can be filled with sample values. When the second half is full, write it all together to the file, and so on. That way you have far more time for the file-system access (in the example above, 100 x 10 ms = 1 s) than the 10 ms task cycle time.
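Sketched in Java for illustration, since no TwinCAT code was posted (in Structured Text, the writer below corresponds to calling the file function block each cycle until its BUSY output drops):

import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Two-sided ("ping-pong") buffer: the sampling task fills one half while a
// single writer flushes the other. Assumes writing one half takes less than
// the 1 s it takes to fill the other half.
class PingPongLogger {
    private static final int HALF = 100;   // samples per half (100 x 10 ms = 1 s)
    private final double[][] halves = {new double[HALF], new double[HALF]};
    private final BlockingQueue<double[]> fullHalves = new ArrayBlockingQueue<>(2);
    private int side = 0, count = 0;

    PingPongLogger() {
        Thread writer = new Thread(() -> {
            try (FileWriter out = new FileWriter("samples.csv", true)) {
                while (true) {
                    for (double v : fullHalves.take()) {  // block until a half is full
                        out.write(v + "\n");
                    }
                    out.flush();                          // one file access per second
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called from the 10 ms sampling task.
    void addSample(double value) {
        halves[side][count++] = value;
        if (count == HALF) {
            fullHalves.offer(halves[side]);   // hand the full half to the writer
            side = 1 - side;                  // keep sampling into the other half
            count = 0;
        }
    }
}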
But this is just a suggestion from my experience. I agree with the others: some code would really help.

ftrace: output through GPIO

I am doing some research and need to collect all kernel function calls within a certain time span, e.g. 60 seconds. I am using a Raspberry Pi 4B.
I've tried to use the function tracer ftrace and read trace_pipe (from the tracefs directory, e.g. /sys/kernel/debug/tracing) via
echo function > current_tracer
echo 1 > tracing_on
cat trace_pipe > /home/pi/trace/test.txt
This method seems to be too slow, and too much data gets lost because the buffer overfills: approx. 50-60 M data points are lost and I only get about 3 M data points, so the statistics are not good.
I also tried to use trace-cmd:
trace-cmd record -p function sleep 60
With trace-cmd about 20 M data points get lost, which is much better, but still not good enough to build good statistics. Furthermore, the file I get by doing
trace-cmd report > /home/pi/trace/test_trace-cmd.txt
is about 5-6 GB and takes a few minutes to write. I have no intention of making this file smaller (I assume that is impossible), but I just can't wait that long.
I also worry about putting too much overhead on the system by saving such big trace files. Is that the case?
I am wondering whether it would be possible to direct the output of trace_pipe (or maybe of some other tracing file) to an I/O pin, so that I can connect a logic analyser to that pin and read the data stream with some other device. There would then be no need to save the trace file on the Raspberry Pi itself, and I also hope it would reduce the amount of data getting lost.

How to process sensor data in LabVIEW? Every value is 255

I'm trying to read data from the Yost Labs 3-Space Sensor Nano into LabVIEW via an NI myRIO-1900. I was able to set up a sequence that communicates with the sensor over SPI. However, every time I run the program, it just spits out a single value of 255.
I think I understand that I need to include something that allows all the bytes to be read; I just don't know how to go about it.
As an example, I'm trying to read the gyros (command 0x26), which has a return length of 12 bytes and is a vector (float x3).
Here is my LabVIEW code
and here is the manual for the sensor. The commands I'm using are on pages 29-33. In the image, 0x2B is 'read temperature'.
Any help would be greatly appreciated! Thanks :)
Edit: I had messed up the wiring, so now the output jumps between ~35 and 255. I'm still having trouble getting all 3 gyro values from the SPI read.
Quoting Joe Friedrichsen from his comment:
The express block that resets the sensor is not guaranteed to precede the loop, because there is no data flow between them. The LabVIEW runtime sees two independent, parallel groups and may choose to execute them simultaneously (which on the wire might mean the reset comes between loop commands) or in "reverse" order. Add a wire from the reset block to create a terminal on the loop.
Here's a picture of the fix.
You may wish to consider stringing the error wire through your program and wiring it to the stop terminal of the While Loop. Currently, your loop will keep running even if there's a fault in your hardware. Using the error wire would eliminate the need for the flat sequence structure.

Real-time image processing timing

I am using an Arduino controller with MATLAB to build a system that reads in sensor data, which triggers a live-feed camera to capture an image for real-time image processing. Based on those results, further processing is done through the controller "after some required delay". The problem I am facing is that I cannot read sensor data until the last step of the previous stage has completed. I want to be able to read sensor data continuously, and to process new images even while the last processing stage for the previous image is still executing (because of the delay involved). Thank you for any advice/help. Sorry if my English is bad; I hope I'm clear.
You are asking about executing MATLAB code in asynchronous mode. As this may be hard to control using the parallel constructs provided by matlabpool (e.g. parfor), you may consider writing your code as an MPI program. With MPI you can assign the code to be run on each slave process, while merging/timing the results is controlled by the master process.
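Whatever mechanism you pick, the shape of the solution is the same. Here it is sketched schematically in Java (readSensor, capture, and processImage are hypothetical stand-ins; in MATLAB, parfeval workers or MPI ranks would play the role of the thread pool):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One loop keeps polling the sensor while each captured image is handed to a
// worker pool and processed asynchronously, so the slow stage never blocks
// the next sensor read.
class AsyncPipeline {
    private static final double THRESHOLD = 1.0;     // hypothetical trigger level
    private final ExecutorService workers = Executors.newFixedThreadPool(2);

    void run() {
        while (true) {
            double reading = readSensor();            // never blocked by processing
            if (reading > THRESHOLD) {
                byte[] image = capture();
                workers.submit(() -> processImage(image));  // background stage
            }
        }
    }

    private double readSensor() { return 0.0; }          // stub: poll the controller
    private byte[] capture() { return new byte[0]; }     // stub: grab a frame
    private void processImage(byte[] img) { }            // stub: slow processing + delay
}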

I/O completion port silently fails to read completely

I'm developing a program that needs to write a large amount of data to disk and then read a much smaller amount of it back later. It needs to "bin" related data together, and once it figures out what to do with the data, it processes it further. It's basically acting like a database, but with temp files on disk. Portions of the temp files get reused fairly frequently; since I don't care about the data on disk after I've read it back, those portions of the file can be recycled. I'm using I/O completion ports to implement this because sequential I/O is simply too slow.
The problem is that sometimes when I read the data, I don't get all of it back. For example, I will zero out my read buffer, issue a read of, say, 20 bytes, and when the corresponding completion event triggers, some or even none of my read buffer matches what should be on disk, yet it isn't all zeroed out either. Occasionally I can detect this, sleep for 5 seconds, and read the same portion again, and then it matches what I should have read on the first try. This is taking place on a top-of-the-line SSD, so 5 seconds should be plenty to flush to disk. Moreover, when I stop my application and look at the contents of the file, it is correct on disk. It's as if the previous write hadn't flushed to disk yet and I read back old data.
To test that theory, I tried writing 0xFF over entire sections as I read them. When the error happened again, my read buffer did not contain 0xFFs as I would have expected. So presumably I'm not reading old data.
I also checked that the number of bytes reported by the completion event matches the number of bytes that I passed to ReadFile, and they do match. There is no error returned by the completion event or by ReadFile (other than ERROR_IO_PENDING). I am creating my temp files with FILE_ATTRIBUTE_NORMAL, FILE_FLAG_OVERLAPPED, and FILE_FLAG_RANDOM_ACCESS.
I also tried waiting for all pending writes for a given portion of the file to complete before trying to read it, but to no avail. I would hope that Windows would do that for me, but that isn't covered in any documentation I've read.
I'm really at a loss as to why I'm getting what look to be partial or corrupted reads. I'm just looking for ideas that might explain this behavior, because I'm all out.
From the sound of things, you're firing off writes and reads to the same portions of the same file, and sometimes the data that a read returns isn't what you think you previously wrote.
I assume you are waiting for the write completion for a piece of data before issuing a read request for the same area of the file? If not, the read could occur before the write completes. When lots of data is being written to the same disk, write completions may begin to slow down and writes may spend more time pending (watch out for the resources that this consumes!).
Personally, I'd include my own memory cache layer which knows about a data block until its write completion occurs; you can then satisfy reads for that part of the file from your cache if the write has not yet completed.
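The idea, sketched in Java (the offset-keyed map and the names are made up; on the Win32 side the same bookkeeping would hang off your OVERLAPPED write requests and their completion handlers):

import java.util.concurrent.ConcurrentHashMap;

// Cache layer for writes that are still in flight: remember each block from
// the moment its write is issued until its completion fires, and serve reads
// from the cache in the meantime.
class PendingWriteCache {
    private final ConcurrentHashMap<Long, byte[]> pending = new ConcurrentHashMap<>();

    // Call just before issuing the overlapped/asynchronous write.
    void writeIssued(long offset, byte[] data) {
        pending.put(offset, data.clone());   // copy so the caller can reuse its buffer
    }

    // Call from the write-completion handler.
    void writeCompleted(long offset) {
        pending.remove(offset);
    }

    // Returns the in-flight data for this offset, or null if it is safe to
    // read the offset from the file itself.
    byte[] tryRead(long offset) {
        return pending.get(offset);
    }
}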
