Extracting (clearing) useful true data from a noisy temperature signal

I have a temperature sensor signal with peak values. The sensor was set to 0-275 K for the first measurement step and 150-900 K for the second, but the real temperatures were much higher. My intent was to measure the cooled points between the peaks with my sensor, so the temperature curve contains cooled points between the peaks. For many frames after each peak, the temperature falls slowly relative to the rise of the peaks, and this cooling trend keeps changing until the last frame. My goal is to acquire these cooled temperature points between the peaks. I call the high-temperature peaks max peaks and the low-temperature peaks min peaks. I tried the findpeaks and islocalmax commands, but I need to come up with a smarter algorithm because the peak widths change, first at frame 381000 and again at frame 445500. I have attached the data file together with a figure version of it. How can I extract these cooled points along the frames? Can anyone help?
Thanks.
Figures: the full signal, and a zoomed view showing the max and min peaks and the cooled points.
Here is the link to the .mat and .fig files:
https://easyupload.io/m/ajy5nq
I have tried all combinations of the findpeaks, islocalmin, and islocalmax commands.
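In case it helps, here is a minimal sketch in Python/SciPy of one way to do this (the same idea carries over to MATLAB's findpeaks): detect the heating peaks by prominence rather than by height or width, which is more robust when the peak widths change, then take the minimum of each inter-peak segment as the cooled point. The variable name 'temperature' and the prominence value are guesses; adjust them to the actual contents of the .mat file.

```python
import numpy as np
from scipy.io import loadmat
from scipy.signal import find_peaks

# Load the temperature trace; 'temperature' is a guessed field name --
# inspect the .mat file for the real one.
data = loadmat('data.mat')
temperature = data['temperature'].ravel()

# Detect the heating peaks. Prominence is more robust than a fixed height
# or width when the peak shape changes (e.g. around frames 381000 and 445500).
peaks, _ = find_peaks(temperature, prominence=50.0)  # tune to your signal

# Take the cooled point between two consecutive peaks as the minimum of the
# segment between them.
cooled_idx = np.array([a + np.argmin(temperature[a:b])
                       for a, b in zip(peaks[:-1], peaks[1:])], dtype=int)
cooled_vals = temperature[cooled_idx]
```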

Related

How can I combine properties of rotational data?

I have a project I'm working on that requires rotational data (yaw, pitch, roll) taken from a few different sensors to be combined in code.
From the first sensor I can get good angles, but it has a very bad drift problem.
The second sensor gives very good angles with minimal drift, but it only has a -90 to 90 degree range of motion.
My question is: how can I combine the data from these two sensors so that I get minimal drift and a 360° range?
I can provide sample data if needed.
Thanks in advance!
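One common approach to this kind of fusion (not stated in the question itself) is a complementary filter: integrate the drifting sensor for short-term response, and pull the estimate toward the stable sensor whenever it is inside its valid range. A minimal sketch, assuming both sensors report in degrees; all names and the blend factor are illustrative:

```python
def complementary_step(prev_angle, drift_rate, stable_angle, dt, alpha=0.98):
    """One fusion step: trust the drifting sensor short-term (its rate is
    integrated every step) and the stable sensor long-term (it slowly pulls
    the estimate back, cancelling the drift).

    drift_rate   -- angular rate from the drifting sensor, deg/s
    stable_angle -- angle from the low-drift sensor, deg, valid in -90..90
    """
    integrated = prev_angle + drift_rate * dt
    # Only blend in the stable sensor while it is inside its usable range;
    # outside that range, fall back to pure integration.
    if -90.0 < stable_angle < 90.0:
        return alpha * integrated + (1.0 - alpha) * stable_angle
    return integrated

# Example: a 100 Hz loop over fake (rate, angle) samples
angle = 0.0
for drift_rate, stable_angle in [(1.0, 0.02), (1.2, 0.03)]:
    angle = complementary_step(angle, drift_rate, stable_angle, dt=0.01)
```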

Temperature Regulation Circuit Arduino

I am currently working on a water-flow control system using an Arduino. The goal is that I can set the temperature of the water stream using the Arduino.
I have a Y-shaped piece of hosing. On the upper left arm of the Y piece I have a constant stream of cold water at around 12°C. On the upper right arm I have a valve with which I can regulate how much hot water I mix in; the hot water is around 50°C. To regulate the hot-water intake I am using a servo motor, which cranks the valve to a given position.
On the lower part of the Y I have a temperature sensor which tells me the mixed temperature of the water.
The algorithm I came up with to mix water to a specific temperature looks like this:
1. Calibrate by finding the minimum and maximum temperatures and the corresponding servo positions:
1.1 Set the servo position to the absolute minimum, wait 10 seconds, and read the temperature -> minTemperature, minPos
1.2 Set the servo position to the absolute maximum, wait 10 seconds, and read the temperature -> maxTemperature, maxPos
2. Set the mixing temperature to X°C:
2.1 (maxTemp - minTemp) / (maxPos - minPos) = p °C/pos, which means that changing the servo by one position changes the mix temperature by p °C
2.2 minPos + (X - minTemp) / p = targetPos
3. If abs(measuredTemp - X) > tolerance, then do step 2 again (a rough sketch of the whole scheme follows below).
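A rough Python sketch of the scheme above, purely to make the steps concrete. The hardware calls, servo limits, and settling time are placeholders (simulated here so the snippet runs), and the last line adds a small correction so that repeating step 2 can actually converge:

```python
import time

# Placeholder hardware interface -- replace with real servo/sensor calls.
# Here it simulates a linear 12-48 degC mixing range so the snippet runs.
_servo_pos = 0.0
def set_servo(pos):
    global _servo_pos
    _servo_pos = pos

def read_temp():
    return 12.0 + 0.3 * _servo_pos  # simulated mixed-water temperature

MIN_POS, MAX_POS = 0.0, 120.0  # hypothetical servo limits
SETTLE_S = 0.01                # use ~10 s on real hardware
TOLERANCE = 0.5                # degC

def calibrate():
    """Step 1: find the temperature range and the degC-per-position slope."""
    set_servo(MIN_POS); time.sleep(SETTLE_S); min_temp = read_temp()
    set_servo(MAX_POS); time.sleep(SETTLE_S); max_temp = read_temp()
    p = (max_temp - min_temp) / (MAX_POS - MIN_POS)  # step 2.1
    return min_temp, p

def set_mix_temperature(x, min_temp, p):
    """Steps 2.2 and 3, with a small correction so the loop can converge."""
    while True:
        target_pos = MIN_POS + (x - min_temp) / p   # step 2.2
        set_servo(target_pos)
        time.sleep(SETTLE_S)
        error = read_temp() - x
        if abs(error) <= TOLERANCE:                 # step 3
            return target_pos
        min_temp += error  # attribute the error to a drifted input temperature

min_temp, p = calibrate()
print(set_mix_temperature(30.0, min_temp, p))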
Is this approach viable at all, when it comes to real life implementation? How are other heat regulation circuits done?
This will basically work, but there are a couple of problems you should fix:
The relationship between servo position and temperature is not going to be linear. At a minimum, calibrate at 4 different positions and fit a cubic polynomial.
Because a valve has a lot of friction, and the positioning algorithms in off-the-shelf servos are not awesome, the position the servo reaches when you command a move to position X from a lower position is not the same as the position it reaches when you command a move to the same position X from a higher position. You should calibrate separate curves for increasing and decreasing temperature, and make sure you command motion that approaches a desired temperature slowly in order to get repeatable results.
Once you reach the correct position according to the calibrated curves, if your temperature is still off you can move slowly toward it and adjust the calibration. You probably want to assume that the error comes from a difference in the input temperature and adjust accordingly.
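For the suggested non-linear calibration, a minimal sketch using NumPy: fit a cubic through (at least) four calibration points, then invert it numerically to find the servo position for a target temperature. The calibration numbers here are made up for illustration; in practice you would fit one such curve per approach direction, per the hysteresis point above.

```python
import numpy as np

# Hypothetical calibration data: servo positions and the mixed-water
# temperature measured at each, after settling.
positions = np.array([0.0, 40.0, 80.0, 120.0])
temps = np.array([12.0, 21.5, 33.0, 48.0])

# Fit temperature as a cubic function of position (exact with 4 points;
# more points give a least-squares fit).
coeffs = np.polyfit(positions, temps, 3)

def position_for_temp(target, lo=0.0, hi=120.0):
    """Invert the fitted curve by a dense grid search."""
    grid = np.linspace(lo, hi, 1000)
    return grid[np.argmin(np.abs(np.polyval(coeffs, grid) - target))]

print(position_for_temp(30.0))
```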

Gait/Walk analysis from sensor data

I've assembled a carpet with 8 pressure sensors inside; you can see the sensor arrangement in the picture. The entire carpet is 80x80 cm. Each sensor outputs a digital signal (0 or 1) when it is pressed. The microcontroller reads all the sensors every 100 ms and outputs a payload byte in which each bit carries the state of a single triangle. I'm storing all these bytes in a 100-byte array. From this array I need to calculate the gait (the direction, i.e. the angle the user is heading). The user is simply marching in place; the feet are raised and lowered alternately. Do you know any algorithm that could be used for this kind of analysis? Should I use machine learning / neural networks? The language doesn't matter; I just need to figure out the right way to analyse this byte array. Thanks!
sensors inside the carpet
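Before reaching for machine learning, a simple geometric baseline may already work: unpack each payload byte, compute the centroid of the pressed sensors per frame, and estimate the facing direction as perpendicular to the line along which the centroid jumps as the feet alternate. A heavily hedged sketch; the sensor coordinates and bit order are invented and must be replaced with the real layout from the picture:

```python
import numpy as np

# Hypothetical (x, y) centres (in cm) of the 8 triangular sensors on the
# 80x80 cm carpet -- replace with the real layout.
SENSOR_POS = np.array([
    [20.0, 60.0], [60.0, 60.0], [20.0, 20.0], [60.0, 20.0],
    [40.0, 70.0], [40.0, 10.0], [10.0, 40.0], [70.0, 40.0],
])

def frame_centroid(payload):
    """Centroid of the pressed sensors in one payload byte, or None."""
    mask = np.array([(payload >> bit) & 1 for bit in range(8)], dtype=bool)
    if not mask.any():
        return None
    return SENSOR_POS[mask].mean(axis=0)

def estimate_heading(frames):
    """Very rough heading estimate: the feet press in alternation, so the
    centroid jumps along the foot-to-foot axis; the user faces
    perpendicular to that axis (up to a 180 degree ambiguity)."""
    cs = [c for c in (frame_centroid(b) for b in frames) if c is not None]
    if len(cs) < 2:
        return None
    deltas = np.diff(np.array(cs), axis=0)
    d = deltas[np.argmax(np.linalg.norm(deltas, axis=1))]  # dominant jump
    foot_axis = np.degrees(np.arctan2(d[1], d[0]))
    return (foot_axis + 90.0) % 360.0

frames = [0b00000001, 0b00000010, 0b00000001, 0b00000010]
print(estimate_heading(frames))
```

If this baseline proves too noisy, a classifier over windows of the byte array would be the next step.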

Using a particle filter with multiple sensors with different sampling rates

Current situation:
I have implemented a particle filter for an indoor localisation system. It uses fingerprints of the magnetic field. The implementation of the particle filter is pretty straightforward:
1. Create all particles, uniformly distributed over the entire area
2. Give each particle a velocity (Gaussian distributed around a 'normal' walking speed) and a direction (uniformly distributed over all directions)
3. Change the velocity and the direction (both Gaussian distributed)
4. Move each particle in its direction by its velocity multiplied by the time difference between the last and the current measurement
5. Find the closest fingerprint to each particle
6. Calculate the new weight of each particle by comparing its closest fingerprint with the given measurement (see the sketch after this list)
7. Normalize
8. Resample
9. Repeat #3 to #9 for every measurement
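A minimal sketch of steps 5-7, assuming 2-D particle positions, fingerprints with one scalar magnetic value each, and a Gaussian measurement model; all names and numbers are illustrative:

```python
import numpy as np

def update_weights(particles, weights, fp_positions, fp_values,
                   measurement, sigma=1.0):
    """Steps 5-7: find each particle's nearest fingerprint, weight by a
    Gaussian likelihood of the measurement, then normalize."""
    for i, p in enumerate(particles):
        nearest = np.argmin(np.linalg.norm(fp_positions - p, axis=1))
        residual = measurement - fp_values[nearest]
        weights[i] *= np.exp(-0.5 * (residual / sigma) ** 2)
    return weights / weights.sum()

particles = np.random.rand(1000, 2) * 50.0       # positions in a 50x50 m area
weights = np.full(1000, 1.0 / 1000)
fp_positions = np.random.rand(200, 2) * 50.0     # fingerprint locations
fp_values = np.random.randn(200)                 # fingerprint field values
weights = update_weights(particles, weights, fp_positions, fp_values, 0.3)
```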
The problem:
Now I would like to do basically the same thing, but add another sensor to the system (namely WiFi measurements). If the measurements arrived at the same time, there wouldn't be a problem: I would just calculate the probability for the first sensor and multiply it by the probability for the second sensor to get the particle's weight in #6.
But the magnetic field sensor has a very high sample rate (about 100 Hz) and the WiFi measurement appears roughly every second.
I don't know what would be the best way to handle the problem.
Possible solutions:
1. Throw away (or average) all the magnetic field measurements until a WiFi measurement appears, then use the last magnetic field measurement (or the average) together with the WiFi signal. This basically reduces the sample rate of the magnetic field sensor to that of the WiFi sensor
2. For every magnetic field measurement, use the last seen WiFi measurement
3. Use the sensors separately: whenever I get a measurement from one sensor, I do all of steps #3 to #9 without using any measurement data from the other sensor
4. Any other solution I haven't thought about ;)
I'm not sure which would be the best solution; none of them seems particularly good.
With #1 I would say I'm losing information, although I'm not sure it even makes sense to feed a particle filter at a sample rate of about 100 Hz.
With #2 I have to assume that the WiFi signal doesn't change quickly, which I can't prove.
If I use the sensors separately, the magnetic field measurements become much more important than the WiFi measurements, since steps #3 to #9 will have run about 100 times on magnetic data before one WiFi measurement appears.
Do you know a good paper that deals with this problem?
Is there already a standard solution for handling multiple sensors with different sample rates in a particle filter?
Does a sample rate of 100 Hz make sense? Or what would be a proper time step for one iteration of the particle filter?
Thank you very much for any kind of hint or solution :)
In #2, instead of using sample-and-hold, you could delay the filter by 1 s and interpolate between WiFi measurements in order to up-sample, so that you have both signals at 100 Hz.
If you know more about the WiFi behaviour you could use something more advanced than linear interpolation to model it between updates. These folks use a more advanced asynchronous hold to up-sample the slower sensor signal, but something like a Kalman filter might also work.
With regard to update speed, I think 100 Hz sounds high for your application (assuming you are positioning a human walking indoors), since you are likely to take in a lot of noise; lowering the sampling frequency is a cheap way to filter out high-frequency noise.
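A minimal sketch of that interpolation idea, assuming scalar RSSI values and the sample rates from the question:

```python
import numpy as np

# Hypothetical timestamps: magnetic samples at 100 Hz, WiFi roughly every 1 s.
t_mag = np.arange(0.0, 10.0, 0.01)
t_wifi = np.arange(0.0, 10.0, 1.0)
rssi = np.random.randn(t_wifi.size)   # stand-in for real RSSI values

# Delay the filter by one WiFi interval, then linearly interpolate so a WiFi
# value is available at every magnetic sample time (sample-and-hold replaced
# by interpolation between the surrounding WiFi measurements).
rssi_at_100hz = np.interp(t_mag, t_wifi, rssi)
```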

Linearly Normalizing Stack of Images (data?) Prior to Averaging?

I'm writing an application that averages/combines/stacks a series of exposures. This is commonly used to reduce noise in the resultant image.
However, it seems that to optimize the average/stack, the exposures are usually first normalized. This process apparently assigns a weight to each exposure and then combines them. I am guessing that it computes the overall intensity of each image, since the purpose is to match the intensities of all the images in the stack.
My question is: how can I implement an algorithm that will let me normalize a series of images? I guess the question could be generalized by instead asking, "How can I normalize a series of readings?"
An outline in my head appears as follows:
Compute the average of a reference image.
Divide the average of each frame by the average of the reference frame.
The result of each division is the weight for each frame.
Scale/Multiply each pixel in a frame by the weight found for that particular frame.
Does this seem to make sense to anyone? I have tried googling for the past hour but didn't find anything. I also looked at the indexes of various image-processing books on Amazon, but that didn't turn up anything either.
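For what it's worth, a minimal NumPy sketch of the outline above. One detail: for the multiplication to actually match intensities, the per-frame weight has to be refMean / frameMean, i.e. the reciprocal of the division in step 2. Shapes and data are invented:

```python
import numpy as np

def normalize_stack(frames, ref_index=0):
    """Scale each frame so its mean intensity matches the reference frame's
    mean. frames: array of shape (N, H, W)."""
    frames = np.asarray(frames, dtype=float)
    ref_mean = frames[ref_index].mean()
    weights = ref_mean / frames.mean(axis=(1, 2))  # one weight per frame
    return frames * weights[:, None, None]

# Fake stack whose frames differ in overall brightness
stack = np.random.rand(5, 64, 64) * np.linspace(0.5, 1.5, 5)[:, None, None]
normalized = normalize_stack(stack)
print(normalized.mean(axis=(1, 2)))  # all means now match frame 0's mean
```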
Each integration consists of signal and assorted noise: some is time-independent (e.g. bias or CCD readout noise), some is time-dependent (e.g. dark current), and some is random (shot noise). The aim is to remove the noise and leave the signal. So you would first subtract the 'fixed' sources using dark frames (which will include dark current, readout noise, and bias), leaving signal plus shot noise. Signal scales as flux times exposure time; shot noise scales as the square root of the signal:
http://en.wikipedia.org/wiki/Shot_noise
so overall your signal/noise scales as the square root of the integration time (assuming your integrations are short enough that they are not saturated). So by adding frames you are simply increasing the exposure time, and hence the signal/noise ratio. You don't need to normalize first.
To complicate matters, transient non-Gaussian noise is also present (e.g. cosmic ray hits). There are many techniques for dealing with these, but a common one is 'sigma-clipping', where you have an extra pass to calculate the mean and standard deviation of each pixel, and then reject outliers that are many standard deviations from the mean. Real signal will show Gaussian fluctuations around the mean value, whereas transients will show a large deviation in one frame of the stack. Maybe that's what you are thinking of?
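A minimal sketch of that sigma-clipping pass, assuming the stack is already dark-subtracted and registered; the shape convention is invented:

```python
import numpy as np

def sigma_clip_mean(frames, nsigma=3.0):
    """Average a stack of frames, rejecting per-pixel outliers (e.g.
    cosmic-ray hits) more than nsigma standard deviations from that
    pixel's mean across the stack. frames: shape (N, H, W)."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    keep = np.abs(frames - mean) <= nsigma * std
    n_kept = np.maximum(keep.sum(axis=0), 1)   # avoid division by zero
    return np.where(keep, frames, 0.0).sum(axis=0) / n_kept
```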
