I am using an Arduino controller with MATLAB to build a system that reads in sensor data, which triggers a live-feed camera to capture an image for real-time image processing. Based on these results, further processing is done "after some required delay" through the controller. The problem I am facing is that I cannot read sensor data until the last step of the previous stage has completed. I want to be able to read sensor data continuously and process images even while the last processing stage based on the previous image is still executing (because of the delay provided). Thank you for any advice/help. Sorry if my English is bad; I hope I'm clear.
You are asking about executing MATLAB code asynchronously. As this may be hard to control using the parallel processes generated by matlabpool (e.g. parfor), you may consider writing your code with MPI. With MPI you can assign the code lines to be run on each slave process, while merging/timing the results is controlled by the master process.
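To make the shape of that split concrete, here is a minimal producer/consumer sketch. It is plain Ruby rather than MATLAB or MPI code, and read_sensor, capture_image and process_image are made-up stand-ins; it only illustrates one thread working on the previous image so the sensor loop never has to wait for that processing to finish:

# Hypothetical stand-ins for the real sensor read, camera capture and processing.
def read_sensor
  rand(1024)
end

def capture_image(reading)
  "image for reading #{reading}"
end

def process_image(image)
  sleep 0.5   # pretend this is the slow processing plus the required delay
end

jobs = Queue.new

# Worker: processes the previous image while new readings keep arriving.
worker = Thread.new do
  while (image = jobs.pop)
    process_image(image)
  end
end

# Main loop: keeps reading the sensor and hands each new image off without waiting.
10.times { jobs << capture_image(read_sensor) }

jobs << nil   # signal the worker that no more images are coming
worker.join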
I am part of a tractor pulling team and we have a Beckhoff CX8190-based PLC for data logging. The system works most of the time, but every now and then saving the sensor values (one sample is collected every 10 ms) to CSV fails, mostly in the middle of a CSV row. The guy who built the code is new to TwinCAT and does not know how to find what causes this. Any ideas where to look for the reason?
Writing to a file is always an asynchronous action in TwinCAT. That is to say, it is not a real-time action, and there is no guarantee that the writing process finishes within the task cycle time of 10 ms. Therefore these function blocks always have a BUSY output which has to be evaluated, and the function block has to be called successively until the BUSY output returns to FALSE. Only then can a new write command be executed.
I normally tackle this task with a two-sided-buffer algorithm. Let's say the buffer array has 2x100 entries. Fill up the first 100 entries with sample values, then write them all together to the file with one command. When that is done, clear that half of the buffer. In the meantime the other half of the buffer can be filled with sample values. When the second half is full, write it all together to the file, and so on. This way you give the file-system access much more time (in the example above, 100 x 10 ms = 1 s) than the 10 ms task cycle time.
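As a rough sketch of that idea only (plain Ruby rather than TwinCAT structured text, with made-up sizes and a background thread standing in for the asynchronous file access):

BUFFER_SIZE = 100   # with a 10 ms task this gives the file system roughly 1 s per write

def write_to_file(samples)
  # Stand-in for the slow, asynchronous file access (the BUSY-handshake function block in TwinCAT).
  File.open("samples.csv", "a") { |f| f.puts(samples) }
end

buffers = [[], []]   # the two sides of the buffer
active  = 0          # side currently being filled
pending = nil        # write that may still be running in the background

1_000.times do |i|
  buffers[active] << "#{i},#{rand.round(4)}"   # one sample per "task cycle"

  if buffers[active].size == BUFFER_SIZE
    pending&.join                              # make sure the other side has been written and cleared
    full, active = active, 1 - active          # swap sides; keep sampling into the fresh side
    pending = Thread.new do
      write_to_file(buffers[full])
      buffers[full].clear
    end
  end
end

pending&.join                                  # flush the last background write

The side swap only happens once the previous write is confirmed done, which is the same role the BUSY handshake plays with the real function blocks.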
But this is just a suggestion from my experience. I agree with the others: some code would really help.
I am doing some research and need to collect all the kernel function calls within a certain time span, e.g. 60 seconds. I am using a Raspberry Pi 4B.
I've tried to use the function tracer ftrace and read the trace_pipe via
cd /sys/kernel/debug/tracing   # tracefs mount point; on newer kernels also /sys/kernel/tracing
echo function > current_tracer
echo 1 > tracing_on
cat trace_pipe > /home/pi/trace/test.txt
This method seems to be too slow, and too much data gets lost due to an overfilled buffer: approx. 50-60 M data points get lost and I only get about 3 M data points, so the statistics are not good.
I also tried to use trace-cmd:
trace-cmd record -p function sleep 60
With trace-cmd about 20 M data points get lost, which is much better, but still not good enough to build good statistics. Furthermore, the file I get by doing
trace-cmd report > /home/pi/trace/test_trace-cmd.txt
is about 5-6 GB and takes a few minutes to write. I have no intention of making this file smaller (I assume it is impossible), but I just can't wait that long.
I am also worried about producing too much overhead on the system by saving such big trace files. Is that the case?
I am wondering whether it would be possible to direct the output of trace_pipe (or maybe of some other tracing file) to some I/O pin, so that I can connect a logic analyser to that pin and read the data flow with some other device. There would then be no need to save the tracing file on the Raspberry Pi itself. I also hope this would reduce the amount of data getting lost.
I'm trying to read data from the Yost Labs 3-Space Sensor Nano into LabVIEW via an NI myRIO-1900. I was able to set up a sequence that communicates with the sensor through SPI. However, every time I run the program, it just spits out a single value of 255.
I think I understand that I need to include something that allows all the bytes to be read; I just don't know how to go about it.
As an example, I'm trying to read the gyros (0x26), which have a return length of 12 bytes and come back as a vector (float x3).
Here is my LabVIEW code,
and here is the manual for the sensor. The commands I'm using are on pages 29-33. In the image, 0x2B is 'read temperature'.
Any help would be greatly appreciated! Thanks :)
Edit: I had messed up the wiring, so now the output jumps between ~35 and 255. I'm still having trouble getting all 3 gyro values from the SPI read.
Quoting from Joe Friedrichsen in his comment:
The express block that resets the sensor is not guaranteed to precede the loop because there is no data flow between them. The LabVIEW runtime sees two independent, parallel groups and may choose to execute them simultaneously (which on the wire might mean the reset comes between loop commands) or in "reverse" order. Add a wire from the reset block to create a terminal on the loop.
Here's a picture of the fix.
You may wish to consider stringing the error wire through your program and wiring it to the stop terminal of the While Loop. Currently, your loop will keep running even if there's a fault in your hardware. Using the error wire would eliminate the need for the flat sequence structure.
I'm writing a simple data logging app that collects accelerometer and gyroscope data and writes it to CSV.
It seems that when the watch is on the dock for charging, it works fine. When it's removed from the dock and runs on battery, it stops sending sensor events to onSensorChanged.
Meaning of the graph: the number of samples per second. It should be close to 100, since 100 Hz is the sampling rate. The straight line at the beginning aligns with the time the watch was on the dock.
My code did acquire a WakeLock from PowerManager. What else could cause this problem?
P.S.: Data collection is implemented in a WearableListenerService. Sensor events from onSensorChanged are pushed to a double buffer. There's a single data-writing thread that keeps checking the buffers, pops a full buffer, and writes it to CSV.
Problem:
Hi everyone, I am currently building an automation suite using Ruby, Selenium WebDriver and Cucumber to load data into the application through its GUI. I take input from mainframe .txt files. The scenarios are, for example, to create a customer and then load multiple accounts for them as per the data provided in the input files.
Current Approach
Execute the scenario using a rake task, passing the line number as a parameter, so the script is executed for only one set of data.
To read the data for a particular line, I'm using the code below:
File.readlines("#{file_path}")[line_number.to_i - 1]
My purpose in loading line by line is to keep the execution running even if a line fails to load.
Shortcomings
Suppose I have to load 10 accounts for a single customer. My current script will run 10 times, once to load each account. I want something that can load all the accounts in a single go.
What I am looking for
To overcome the above shortcoming, I want to capture all the data for a single customer from the file (the accounts etc.) and load it into the application in a single execution.
I also have to keep track of execution time and memory allocation.
Please share your thoughts on this approach; any suggestions or improvements are welcome. (Sorry for the long post.)
The first thing I'd do is break this down into steps -- as you said in your comment, but more formally here:
1. Get the data to apply to all records. Put up a page with the necessary information (or support command-line specification if it's not too much?).
2. For each line in the file, do the following (automated):
   - Get the web page for inputting its data;
   - Fill in the fields;
   - Submit the form.
Given this, I'd say the 'for each line' instruction should definitely read one line at a time from the file using File.foreach or similar.
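A minimal sketch of the per-customer grouping you described (one pass over the file, then one execution per customer). The pipe-delimited layout and the create_customer/load_account helpers are made up here; swap in your real record format and step definitions:

file_path = "customer_accounts.txt"   # made-up name; point it at your mainframe extract

# Hypothetical stand-ins for the real Selenium/Cucumber steps.
def create_customer(customer_id)
  puts "creating customer #{customer_id}"
end

def load_account(customer_id, fields)
  puts "  loading account #{fields.first} for #{customer_id}"
end

# Assumed layout: "customer_id|account_no|..." -- one account per line.
accounts_by_customer = Hash.new { |h, k| h[k] = [] }

File.foreach(file_path) do |line|
  customer_id, *account_fields = line.chomp.split("|")
  accounts_by_customer[customer_id] << account_fields
end

accounts_by_customer.each do |customer_id, accounts|
  create_customer(customer_id)                                  # fill the customer form once
  accounts.each { |fields| load_account(customer_id, fields) }  # then load every account in the same run
end

Because File.foreach streams the file instead of slurping it all in with File.readlines, memory use stays flat even for large extracts, and a failing account can be rescued inside the inner loop without aborting the rest of that customer's data.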
Is there anything beyond this that needs to be taken into account?