First of all, I am a newbie at PID control and other control techniques. After reviewing documentation and C source files on the net, there is an issue on my mind.
PID block diagrams in some documents show the PID output feeding the plant directly.
In my application, the PID input is Vout, and I control the PWM frequencies of MOSFETs.
So, if I feed the MOSFETs directly with the PID output (assume that I also limit it), am I doing it right?
Because this seems wrong to me.
As I understand it, the PID output is a value that varies between negative and positive limits.
This value should be added to the actual frequency value (the raw value that will be written into the PWM registers), and then a limiter should be applied to that sum.
If my approach is okay, how do I specify the limiter for the PID output?
Please don't judge me if this is nonsense.
You can try both and see which gives better results. If you are adding an offset, you are likely to have a low or zero P term.
Your limits are entirely based on the application. For example, small electronics need very short, rapid pulses. Something like a heating element will need longer pulses. You should find the voltage range you want, and then, once your PID is tuned, use a voltmeter to experimentally find limits that keep you from going outside that range.
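For illustration, here is a minimal C sketch of the incremental approach described in the question: the signed PID output is added to the current raw PWM value, and the sum (not the PID output alone) is limited. The register width, limits, and names are assumptions for the example, not taken from any particular part.

/* Incremental update: add the PID output to the current raw PWM value
 * and clamp the sum. PWM_MIN/PWM_MAX and the 10-bit width are assumed. */
#define PWM_MIN 0
#define PWM_MAX 1023

static int pwm_value = 0;          /* raw value currently in the PWM register */

void update_pwm(int pid_step)      /* pid_step: signed PID output this cycle */
{
    int next = pwm_value + pid_step;

    if (next > PWM_MAX) next = PWM_MAX;   /* limit the sum, not the step */
    if (next < PWM_MIN) next = PWM_MIN;

    pwm_value = next;
    /* write pwm_value to the PWM compare/period register here */
}

With this structure, the natural limiter for the sum is simply the valid range of the PWM register; a separate limit on pid_step only controls how fast the output is allowed to change per cycle.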
I am trying to understand the add_disk_randomness function from the linux kernel. I read a few papers, but they don't describe it very well. This code is from /drivers/char/random.c:
add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
disk_devt(disk) holds the variable disk->devt. I understand it as a unique block device number. But what are the major and minor device numbers?
Then the block device number and the hex 0x100 are added together.
We also pass the value disk->random. Is this the seek time for each block?
These two values will be passed to the function add_timer_randomness. It would be nice to get an example with values.
The first parameter of add_timer_randomness is a pointer to struct timer_rand_state. You can confirm this by checking struct gendisk.
struct timer_rand_state from random.c is reproduced below:
/* There is one of these per entropy source */
struct timer_rand_state {
    cycles_t last_time;
    long last_delta, last_delta2;
};
This struct stores the timestamp of the last input event as well as previous "deltas". add_timer_randomness first gets the current time (measured in jiffies), then reads last_time (also in jiffies), then overwrites last_time with the first value.
The first-, second-, and third-order "deltas" are tracked as a means of estimating entropy. The main source of entropy from hard disk events is the timing of those events. More data is hashed into the entropy pool, but it doesn't contribute to the entropy estimates. (It is important not to overestimate how unpredictable the data you hash in is; otherwise your entropy pool, and therefore your RNG output too, may be predictable. Underestimating entropy, on the other hand, cannot make RNG output more predictable. It is always better to use a pessimistic estimator in this respect. That is why data that doesn't contribute to the entropy estimate is still hashed into the entropy pool.)
Delta is the time between two events (the difference between timestamps). The second-order delta is the difference between the times between two events (the difference between deltas). Third-order deltas are the differences between second-order deltas. The timer_rand_state pointer is the memory location that tracks the previous timestamp and deltas. delta3 does not need to be stored.
The entropy estimate from this timing data is based on the logarithm of the smallest absolute value among deltas one, two, and three. (Not exactly the logarithm: it's always an integer, for example, the value is rounded down by one bit first, and if the value you're taking the almost-logarithm of is zero the result is also zero.)
Say you have a device used as an entropy source that generates a new event every 50 milliseconds. The delta will always be 50 ms, so the second-order delta is always zero. Since one of the three deltas is zero, this device's timings cannot be relied on as a significant entropy source. The entropy estimator successfully fails to overestimate input entropy, so even if this device is used as an entropy source it won't "poison" the entropy pool with predictability.
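To make the bookkeeping concrete, here is a hedged, self-contained C sketch of the delta tracking and bit-credit idea described above. It is simplified from the approach in drivers/char/random.c, not copied from it; the struct and function names are illustrative only.

#include <stdlib.h>

/* Illustrative mirror of the per-source state (not the kernel's types). */
struct example_timer_rand_state {
    long last_time;
    long last_delta, last_delta2;
};

static int estimate_entropy_bits(struct example_timer_rand_state *s, long now)
{
    long delta  = now    - s->last_time;    /* first-order delta  */
    long delta2 = delta  - s->last_delta;   /* second-order delta */
    long delta3 = delta2 - s->last_delta2;  /* third-order delta  */

    s->last_time   = now;
    s->last_delta  = delta;
    s->last_delta2 = delta2;

    /* take the smallest absolute value of the three deltas */
    unsigned long d = labs(delta);
    if ((unsigned long)labs(delta2) < d) d = labs(delta2);
    if ((unsigned long)labs(delta3) < d) d = labs(delta3);

    /* credit the bit length of (d >> 1), capped at 11 bits;
     * a perfectly regular source (d == 0) credits nothing */
    int bits = 0;
    for (d >>= 1; d; d >>= 1)
        bits++;
    return bits > 11 ? 11 : bits;
}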
The entropy estimate isn't based on any formal mathematics. We can't construct an accurate model of the entropy source because we don't know what it is. We don't know exactly what hardware a user's computer will have or exactly how it will behave in an unknown environment. We just want to know that if we add one to the (estimated) entropy counter, then we've hashed at least one bit of entropy's worth of unpredictable data into the entropy pool. Extra data besides the timings is hashed into the pool without increasing the entropy counter, so we hope that if the timer-based estimator sometimes overestimates, there is some unpredictability in the non-timer-based data we didn't account for. (And if that's the case, your RNG is still safe.)
I'm sure that sounds unconvincing, but I don't know how to help that. I tried my best to explain the relevant parts of the random.c code. Even if I could mind meld and provide some intuition for how the process works it probably would still be unsatisfying.
I am writing an application in AVR Studio 4 which generates random numbers and outputs them on a seven-segment display. At the moment I am using a seed; the seed value gets randomized and the value is output. This method obviously produces the same random number sequence (and displays the same sequence) every time the program is run. Is there an alternative method I can use which does not rely on a fixed seed, and so does not start the program with the same number each time, allowing for different random numbers?
Thanks
Each time the microcontroller starts up it is seeing exactly the same internal state as any other time it starts up. This means its output will always be the same regardless of any algorithm you might use.
The only way to get it to produce different behaviour is to somehow modify its state at startup by introducing some external information or by storing state between startups. Some ideas for how to do the first option might be to measure the duration of a user key press (if your system has buttons) or sensing a temperature or other external input and using this to seed the algorithm. However the simplest option is probably to just store a counter in EEPROM that is incremented after each startup and use this to generate the seed.
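As a hedged sketch of the EEPROM-counter idea above, using avr-libc (the EEPROM variable and function names are arbitrary choices for illustration):

#include <stdint.h>
#include <stdlib.h>
#include <avr/eeprom.h>

uint16_t EEMEM boot_counter;                            /* persists across resets */

void seed_prng_from_eeprom(void)
{
    uint16_t count = eeprom_read_word(&boot_counter);   /* previous count */
    count++;                                            /* differs every startup */
    eeprom_write_word(&boot_counter, count);            /* persist for next time */
    srand(count);                                       /* seed the C library PRNG */
}

Call seed_prng_from_eeprom() once at startup before the first rand() call; note that a freshly erased EEPROM will read 0xFFFF the first time, which is harmless here.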
I have a long time series with some repeating and similar-looking signals in it (not entirely periodic). The length of the time series is about 60000 samples. To identify the signals, I take out one of them (around 1000 samples long), move it along my time series data sample by sample, and compute the correlation coefficient (in Matlab: corrcoef). If this value is above some threshold, then there is a match.
But this is excruciatingly slow (using a 'for' loop to move the window).
Is there a way to speed this up, or maybe there is already some mechanism in Matlab for this ?
Many thanks
Edit: added information regarding using 'xcorr' instead:
If I use 'xcorr', or at least the way I have used it, I get the wrong picture. Looking at the data (first plot), there are two types of repeating signals: one marked by red rectangles, and another with much larger amplitudes (this is coherent noise) marked by a black rectangle. I am interested in the first type. The second plot shows the signal I am looking for, blown up.
If I use 'xcorr', I get the third plot. As you can see, 'xcorr' gives me the wrong signal (there is in fact a high cross-correlation between my signal and the coherent noise).
But using 'corrcoef' and moving the window, I get the last plot, which is the correct one.
There may be a problem of normalization when using 'xcorr', but I don't know.
I can think of two ways to speed things up.
1) Make your template 1024 elements long. Then the correlation can be done using an FFT, which is significantly faster than computing the element-by-element product at every position.
2) Ask yourself what it is about your template shape that you really care about. Do you really need the very high frequencies, or are you really after the lower frequencies? If you can re-sample your template and signal so they no longer contain any frequencies you don't care about, it will make the processing significantly faster. Steps to take would include:
determine the highest frequency you care about
filter your data so higher frequencies are blocked
resample the resulting data at a lower sampling frequency
Now combine that with a template whose size is a power of 2
You might find this link interesting reading.
Let us know if any of the above helps!
Your problem seems like a textbook example of cross-correlation. Therefore, there's no good reason to use any solution other than xcorr. A few technical comments:
xcorr assumes that the mean has been removed from the two cross-correlated signals. Furthermore, by default it does not scale by the signals' standard deviations. Both of these issues can be solved by z-scoring your two signals:
c = xcorr(zscore(longSig,1), zscore(shortSig,1));
c = c/n;
where n is the length of the shorter signal. This should produce results equivalent to your sliding-window method.
xcorr's output is ordered according to lags, which can be obtained as a second output argument ([c,lags] = xcorr(...)). Always plot xcorr results with plot(lags,c). I recommend trying a synthetic signal first to verify that you understand how to interpret this chart.
xcorr's implementation already uses the Discrete Fourier Transform, so unless you have unusual requirements it would be a waste of time to code a frequency-domain cross-correlation yourself.
Finally, a comment about terminology: correlating corresponding time points between two signals is plain correlation. That's what corrcoef does (its name stands for correlation coefficient; there is no 'cross-correlation' there). Cross-correlation is the result of shifting one of the signals and calculating the correlation coefficient at each lag.
Quick Summary:
I'm looking for an algorithm to display a four-digit speed signal in such a way that the minimum number of (decimal) digits are changed each time the display is updated.
For example:
Filtered
Signal     Display
--------------------
0000       0000
2345       2000
2345       2300
2345       2340
0190       0340
0190       0190
0190       0190
Details:
I'm working on a project in which I need to display a speed signal (between 0 and 3000 RPM) on a four-digit LCD display. The ideal display solution would be an analog gauge, but I'm stuck with the digital display. The display will be read by a machine operator, and I would like it to be as pleasant to read as possible.
The operator doesn't really care about the exact value of the signal. He will want to know what the value is (to the nearest 10 RPM), and he will want to see it go up and down in response to changes in the operation of the machine. He will not want to see it jumping all over the place.
Here is what I have done so far:
Rounded the number to the nearest 10 RPM so that the last digit always reads 0.
Filtered the signal so that electrical noise and normal sensor fluctuations don't cause the reading to jump around by more than 10 RPM at a time.
Added a +/-10 RPM hysteresis to the signal to avoid cases where it would wobble around the same value (for example: 990 - 1000).
This has cleaned things up nicely when the signal is steady (about 75% of the time), but I still see a lot of unnecessary variation in the signal when it is moving from one steady state to another. As the signal changes from 100 RPM to 1000 RPM (for example), it passes through a lot of numbers along the way. Since it takes a moment to actually read and comprehend the number, there seems to be little point in hitting all of those intermediate states. I tried simply reducing the update rate of the display, but that did not produce satisfactory results. It made the display "feel" sluggish and jumpy, all at the same time. There would be a noticeable delay before the numbers would change, and then they would move in big leaps (100, 340, 620, 980, 1000).
Proposal:
I would like the display to behave as shown in the example:
The display is updated twice per second
A transition from one steady state to another should not take longer than 2 seconds.
If the input signal is higher than the currently displayed value, the displayed signal should increase, but it should never go higher than the input signal value.
If the input signal is lower than the currently displayed value, the displayed signal should decrease, but it should never go lower than the input signal value.
The minimum number of digits should be changed per update (preferably only one digit)
Higher-order digits should be changed first, so that the difference between the display signal and the input signal is reduced as quickly as possible
Can you come up with, or do you know of an algorithm which will output the "proper" 4-digit decimal number according to the above rules?
The function prototype, in pseudo-code, would look something like this:
int GetDisplayValue(int currentDisplayValue, int inputSignal)
{
    //...
}
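As a hedged illustration only (not a tested solution), the body might be filled in along these lines: change the most significant digit that differs from the input, and clamp so the display never passes the input value.

int GetDisplayValue(int currentDisplayValue, int inputSignal)
{
    if (currentDisplayValue == inputSignal)
        return currentDisplayValue;

    /* scan digit positions from most to least significant */
    for (int place = 1000; place >= 1; place /= 10) {
        int curDigit = (currentDisplayValue / place) % 10;
        int inDigit  = (inputSignal        / place) % 10;
        if (curDigit == inDigit)
            continue;

        /* copy the input's digit into this position */
        int candidate = currentDisplayValue + (inDigit - curDigit) * place;

        /* never pass the input: clamp if the candidate overshoots */
        if (inputSignal > currentDisplayValue && candidate > inputSignal)
            candidate = inputSignal;
        if (inputSignal < currentDisplayValue && candidate < inputSignal)
            candidate = inputSignal;

        return candidate;
    }
    return inputSignal; /* not reached when the values differ */
}

Called twice per second, this reproduces the sequence in the example table above (0000, 2000, 2300, 2340, then 0340, 0190), usually changing a single digit per update and falling back to the exact input when copying one digit would overshoot.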
Sorry for the wall of text. I wanted to document my progress to date so that anyone answering the question would avoid covering ground that I've already been through.
If you do not need the data expressed by the 4th digit, and are strictly bound to a 4 digit display, have you considered using the 4th digit as an increase/decrease indicator? Flash some portion of the top or bottom of the zero at 2Hz* to indicate that the next change of the gauge will be an increase or decrease.
I think you could also do well to make a good model of the response of your device, whatever it is, to adjustments, and use that model to extrapolate the target number based on the first half second of the two second stabilization process.
*this assumes that you have the two updates per second you posited. Most 4 digit displays are multiplexed, so you could probably flash it at a much higher frequency with a little driver tweaking.
I think your proposal to change one digit at a time is strange, because it actually provides the user with misinformation... What I would consider instead is adding many MORE state changes, and implementing it so that whenever the signal changes, the gauge moves towards the new value in increments of one. This would provide an analog-gauge-like experience and an "animation" of the change; the operator would very soon recognize subconsciously that digits rotating in the sequence 0,1,2,... denote increasing speed and 9,8,7,... decreasing speed.
E.g.:
Filtered signal   Display
0000              0000
2345              0001
                  0002
                  ...
                  2345
Hysteresis, which you have implemented, is of course very good for the stable state.
This is a delicate question, and my answer does not cover the algorithmic aspect.
I believe that the behaviour you represent in the table at the beginning of your posting is a very bad idea. Lines 2 and 5 display data points that are not and never were in the data, i.e. wrong data, for the sake of user experience. This could be a poor choice in the domain of machine operation.
A lower update rate may "feel sluggish" but is well defined (only "real" data, at most n milliseconds old). A faster update rate will display many intermediate values, but the most significant digits shouldn't change too quickly. Both are easier to test than any generation of pretty but false values.
This will incorporate the sensor value into the displayed value more or less quickly, depending on K:
display = ( K * sensor + (256 - K) * display ) >> 8
Choose K between 0 (display never updated) and 256 (display always equal to sensor).
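As a hedged usage sketch only: the update above called at the 2 Hz display rate, with K = 64 as an arbitrary example and show_on_lcd() standing in for whatever display routine the project actually has.

#include <stdint.h>

#define K 64                          /* arbitrary example smoothing factor */

static int32_t display = 0;           /* smoothed, displayed value */

extern void show_on_lcd(int rpm);     /* hypothetical display routine */

void on_display_tick(int32_t sensor)  /* call at the 2 Hz display rate */
{
    /* exponential smoothing exactly as in the formula above */
    display = (K * sensor + (256 - K) * display) >> 8;

    /* round to the nearest 10 RPM before showing, as in the question */
    int shown = (int)((display + 5) / 10) * 10;
    show_on_lcd(shown);
}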
Is there an algorithm or some heuristic to decide whether digital audio data is clipping?
The simple answer is that if any sample has the maximum or minimum value (+32767 and -32768 respectively, for 16-bit samples), you can consider it clipping. This isn't strictly true, since that value may actually be the correct value, but there is no way to tell whether +32767 really should have been +33000.
For a more complicated answer: there are sample-counting clipping detectors that require x consecutive samples to be at the max/min value before they consider it clipping (where x may be as high as 7). The theory here is that clipping in just a few samples is not audible.
That said, there is audio equipment that clips quite audibly even at values below the maximum (and above the minimum). Typical advice is to master music to peak at -0.3 dB instead of 0.0 dB for this reason. You might want to consider any sample above that level to be clipping. It all depends on what you need it for.
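A hedged C sketch of the sample-counting idea described above (the function name and the threshold parameter are illustrative, not from any particular library):

#include <stdint.h>
#include <stddef.h>

/* Count runs of at least `threshold` consecutive full-scale samples. */
int count_clipped_runs(const int16_t *samples, size_t n, int threshold)
{
    int runs = 0, run_len = 0;

    for (size_t i = 0; i < n; i++) {
        if (samples[i] == INT16_MAX || samples[i] == INT16_MIN) {
            if (++run_len == threshold)
                runs++;                 /* count each qualifying run once */
        } else {
            run_len = 0;                /* run broken */
        }
    }
    return runs;
}

With threshold = 1 this degenerates to the simple "any full-scale sample" rule; with threshold around 7 it only flags runs long enough to be considered audible per the heuristic above.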
If you ever receive values at the maximum or minimum, then you are, by definition, clipping. Those values represent their particular value as well as all values beyond, and so they are best used as outside bounds detectors.
-Adam
For digital audio data, the term "clipping" doesn't really carry a lot of meaning other than "max amplitude". In the analog world, audio data comes from some hardware which usually contains a "clipping register", which allows you the possibility of a maximum amplitude that isn't clipped.
What might be better suited to digital audio is to set some threshold based on the limitations of your output D/A. If you're doing VOIP, then choose some threshold typical of handsets or cell phones, and call it "clipping" if your digital audio gets above that. If you're outputting to high-end home theater systems, then you probably won't have any "clipping".
I just noticed that there are even some nice implementations of this.
For example in Audacity:
Analyze → Find Clipping…
What Adam said. You could also add some logic to detect maximum amplitude values over a period of time and only flag those, but the essence is to determine if/when the signal hits the maximum amplitude.