How do I make two servos move in opposite directions? - arduino-uno

I am trying to make a model of the muscle system in the arm for an Arduino project, but to accomplish this I need the biceps and triceps to move in opposite directions.
I am currently experimenting with a potentiometer, trying to make the two servos move in opposite directions, but the code doesn't work as I expect: they keep moving in the same direction.
My power supply is my laptop; I haven't used a battery pack yet. As for the specific issue, the servos aren't responding to the potentiometer and just jitter.
#include <Servo.h>

Servo Bicep;
Servo Tricep;
Servo Extensor;
Servo Flexor;

int pos = 0;
int biceppin = 3;
const int triceppin = 4;
const int extensorpin = 5;
const int flexorpin = 6;
int potpin = 8;
int potval = 0;
int potval2;

void setup() {
  Bicep.attach(biceppin);
  Tricep.attach(triceppin);
  Extensor.attach(extensorpin);
  Flexor.attach(flexorpin);
}

void loop() {
  potval = analogRead(potpin);
  potval = map(potval, 0, 1023, 0, 180);
  potval2 = 180 - potval;
  Bicep.write(potval);
  Tricep.write(potval2);
  delay(15);
}
Would you be able to tell me what is wrong with the code?
Is there a more efficient way to do the same task?

You set potpin = 8, but analogRead() only works on the analog inputs (A0-A5), and on most boards, including the tagged Arduino Uno, pin 8 is a digital pin.
relevant quote:
[...] you cannot use analogRead() to read a digital pin. A digital pin cannot behave as analog because it isn't connected to the ADC (Analog to Digital Converter).
You can test this with the example from https://www.arduino.cc/en/Reference/AnalogRead
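For example, a minimal corrected sketch for just the two antagonist servos might look like this (a sketch under the assumption that the pot's wiper is rewired to A0; the servo pin numbers follow the original code):

#include <Servo.h>

Servo Bicep;
Servo Tricep;
const int potpin = A0;  // analog-capable pin; move the pot's wiper here

void setup() {
  Bicep.attach(3);
  Tricep.attach(4);
}

void loop() {
  int potval = map(analogRead(potpin), 0, 1023, 0, 180);
  Bicep.write(potval);         // biceps follows the pot
  Tricep.write(180 - potval);  // triceps mirrors it in the opposite direction
  delay(15);
}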

I think there is more than one problem here.
Assuming the program is correct (which may not be the case since, as explained in BOC's answer, you are using pin 8 as an analog input while on an Arduino Uno that pin is digital only), if the servos jitter, a good candidate for the source of the problem is your power supply:
You stated (in a comment, but I edited your question to include it, since it's important) that your power supply is your laptop. Most USB ports are limited to 500 mA.
Although you didn't give your servos' specifications, most servos draw at least 250 mA (a Futaba S3003, for example), and low-cost, low-quality, or bigger servos usually draw more. But even if each of your servos draws only 250 mA (the best-case scenario), you are also powering the Arduino Uno board itself (whose linear regulator is not very efficient), and thus you are reaching the limit of your USB port. In my own experience, most (non-mini) servos draw well over 250 mA.
Possible solutions
As a quick test, if you lack a good power source, I would do the following, which is in general good practice when you are mixing code and electronics in a project:
Connect only one servo (let's say bicep), plus the potentiometer.
Run your program and test the servo's movements.
If it works, disconnect that servo and connect the other one.
Run and test, especially the direction.
If they still jitter, try the one-servo configuration with a better power supply. Be careful: most servos expect 4.8 V as the input voltage, so do not go over that voltage, since you can damage them or significantly reduce their lifetime.
If you have doubts about the potentiometer itself, you can also try the one-servo configuration without the potentiometer at all, just hardcoding values in the servo.write() method; something like this:
void loop() {
  delay(500);
  Bicep.write(50);
  delay(500);
  Bicep.write(100);
}
That reduces any potential problem to a software-only one, since right now, from your description, it's not clear whether you are facing an electronics problem or a software bug.


Multiple thermocouples on raspberry pi - NAN reading when in electrical contact

I have two K-type thermocouples running on individual MAX31855 boards on a Raspberry Pi 3.
The MAX31855 boards share a CLK pin but have separate CS and DO pins, similar to the setup given here:
Multiple thermocouples on raspberry pi
Everything works great until I place both thermocouples on a metal surface, which causes both thermocouple readings to be "NAN". I guess it's a grounding issue? Is there a way to solve this?
Thanks in advance
You can glue them with epoxy to the devices you are monitoring. Thermocouples should not be connected together electrically, and it sounds as though you may have grounded them. Also, I don't know precisely how the MAX works, but in order to be able to read negative temperatures, the thermocouples are probably biased UP slightly (maybe 10 mV).
If you ground them, they will read a voltage that translates to -10 mV, which would be a temperature lower than absolute zero. Hence NaN, i.e. not a number.
I do know how the TH7 works, and the Python code for driving it (it's a Raspberry Pi thing) is here: https://github.com/robin48gx/TH7

OSX AudioUnit SMP

I'd like to know if someone has experience writing a HAL AudioUnit render callback that takes advantage of multi-core processors and/or symmetric multiprocessing.
My scenario is the following:
A single audio component of sub-type kAudioUnitSubType_HALOutput (together with its render callback) takes care of additively synthesizing n sinusoidal partials, each with independently varying, live-updated amplitude and phase values. In itself it is a rather straightforward brute-force nested-loop method (per partial, per frame, per channel).
However, upon reaching a certain upper limit for the number of partials "n", the processor gets overloaded and starts producing drop-outs, while three other processors remain idle.
Aside from the general discussion about additive synthesis being "processor expensive" in comparison to, say, wavetables, I need to know whether this can be resolved the right way, i.e. by taking advantage of multiprocessing on a multi-processor or multi-core machine. Breaking the render thread into sub-threads does not seem the right way, since the render callback is already a time-constrained thread in itself, and the final output has to be sample-accurate in terms of latency. Has someone had positive experience and valid methods for resolving such an issue?
System: 10.7.x
CPU: quad-core i7
Thanks in advance,
CA
This is challenging because OS X is not designed for something like this. There is a single audio thread - it's the highest priority thread in the OS, and there's no way to create user threads at this priority (much less get the support of a team of systems engineers who tune it for performance, as with the audio render thread). I don't claim to understand the particulars of your algorithm, but if it's possible to break it up such that some tasks can be performed in parallel on larger blocks of samples (enabling absorption of periods of occasional thread starvation), you certainly could spawn other high priority threads that process in parallel. You'd need to use some kind of lock-free data structure to exchange samples between these threads and the audio thread. Convolution reverbs often do this to allow reasonable latency while still operating on huge block sizes. I'd look into how those are implemented...
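For the sample exchange, a single-producer/single-consumer ring buffer is the usual lock-free structure. A minimal sketch in C (the size, float payload, and C11 atomics are illustrative choices, not a specific Core Audio API):

#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 4096  /* must be a power of two */

/* Single-producer/single-consumer lock-free ring buffer.
   Zero-initialize the struct before use. */
typedef struct {
    float buf[RING_SIZE];
    _Atomic uint32_t head; /* written only by the producer */
    _Atomic uint32_t tail; /* written only by the consumer */
} spsc_ring;

int ring_push(spsc_ring *r, float x)  /* producer side */
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE) return 0;  /* full */
    r->buf[h & (RING_SIZE - 1)] = x;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 1;
}

int ring_pop(spsc_ring *r, float *x)  /* consumer side */
{
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (h == t) return 0;              /* empty */
    *x = r->buf[t & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 1;
}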
Have you looked into the Accelerate.framework? You should be able to improve the efficiency by performing operations on vectors instead of using nested for-loops.
If you have vectors (of length n) for the sinusoidal partials, the amplitude values, and the phase values, you could apply a vDSP_vadd or vDSP_vmul operation, then vDSP_sve.
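For one output sample, that might look something like this (a minimal sketch assuming phasef and ampf are contiguous float arrays of length n and scratch is a caller-provided buffer of the same length; vvsinf comes from the companion vForce part of Accelerate):

#include <Accelerate/Accelerate.h>

/* One output sample: out = sum over partials of sin(phase[p]) * amp[p]. */
float render_one_sample(const float *phasef, const float *ampf,
                        float *scratch, int n)
{
    float out = 0.0f;
    vvsinf(scratch, phasef, &n);                    /* scratch[p] = sinf(phase[p]) */
    vDSP_vmul(scratch, 1, ampf, 1, scratch, 1, n);  /* scratch[p] *= amp[p]        */
    vDSP_sve(scratch, 1, &out, n);                  /* out = sum(scratch)          */
    return out;
}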
As far as I know, AU threading is handled by the host. A while back, I tried a few ways to multithread an AU render using various methods (GCD, OpenCL, etc.), and they were all either a no-go or unpredictable. There is (or at least was; I have not checked recently) a built-in AU called 'deferred renderer', I believe, and it threads the input and output separately, but I seem to remember there was latency involved, so that might not help.
Also, if you are testing in AU Lab, I believe it is set up specifically to call on a single thread (I think that is still the case), so you might need to tinker with another test host to see if it still chokes when the load is distributed.
Sorry I couldn't help more, but I thought those few bits of info might be helpful.
Sorry for replying to my own question; I don't know another way of adding this relevant information. Editing doesn't seem to work, and a comment is way too short.
First of all, sincere thanks to jtomschroeder for pointing me to the Accelerate.framework.
This would work perfectly for so-called overlap-add resynthesis based on the IFFT. Yet I haven't found a key to vectorizing the kind of process I'm using, which is called "oscillator-bank resynthesis" and is notorious for taxing the processor (F.R. Moore: Elements of Computer Music). Each momentary phase and amplitude has to be interpolated "on the fly" and the last value stored in the control struct for further interpolation. The direction of time and the time stretch depend on live input. Not all partials exist all the time, and the placement of breakpoints is arbitrary and possibly irregular. Of course, my primary concern is organizing the data in a way that minimizes the number of math operations...
If someone could point me at an example of positive practice, I'd be very grateful.
// Here's the simplified code snippet:
OSStatus AdditiveRenderProc(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                     inBusNumber,
                            UInt32                     inNumberFrames,
                            AudioBufferList            *ioData)
{
    // local variables' declarations and behaviour-setting conditional statements
    // some local variables are here for debugging convenience
    // {... ... ...}

    // Get the time-breakpoint parameters out of the gen struct
    AdditiveGenerator *gen = (AdditiveGenerator *)inRefCon;

    // compute interpolated values for each partial's each frame
    // {deltaf[p]... ampf[p][frame]... ...}

    // here comes the brute-force "processor eater" (single channel only!)
    Float32 *buf = (Float32 *)ioData->mBuffers[channel].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        buf[frame] = 0.;
        for (UInt32 p = 0; p < candidates; p++) {
            if (gen->partialFrequencyf[p] < NYQUISTF)
                buf[frame] += sinf(phasef[p]) * ampf[p][frame];
            phasef[p] += (gen->previousPartialPhaseIncrementf[p] + deltaf[p] * frame);
            if (phasef[p] > TWO_PI) phasef[p] -= TWO_PI;
        }
        buf[frame] *= ovampf[frame];
    }

    for (UInt32 p = 0; p < candidates; p++) {
        // store the updated parameters back to the gen struct
        // {... ... ...}
        ;
    }
    return noErr;
}

DCF77 decoder vs. noisy signal

I have almost completed my open-source DCF77 decoder project. It all started when I noticed that the standard (Arduino) DCF77 libraries perform very poorly on noisy signals. In particular, I was never able to get the time out of the decoders when the antenna was close to the computer or when my washing machine was running.
My first approach was to add a (digital) exponential filter + trigger to the incoming signal (a sketch of the idea follows below).
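For illustration, that first stage might have looked roughly like this (a one-pole IIR smoother with a hysteresis trigger; the alpha of 1/16, the thresholds, and the names are my assumptions, not the library's actual code):

// Illustrative only: one-pole exponential smoothing plus a hysteresis trigger.
int smoothed = 0;

bool filterAndTrigger(bool rawSample) {
  static bool output = false;
  int input = rawSample ? 1000 : 0;
  smoothed += (input - smoothed) / 16;  // smoothed ~ exponential average of input
  if (smoothed > 600) {
    output = true;                      // rising threshold
  } else if (smoothed < 400) {
    output = false;                     // falling threshold (hysteresis)
  }
  return output;
}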
Although this improved the situation significantly, it was still not really good. Then I started to read some standard books on digital signal processing and especially the original works of Claude Elwood Shannon. My conclusion was that the proper approach would be to not "decode" the signal at all because it is (except for leap seconds) completely known a priori. Instead it would be more appropriate to match the received data to a locally synthesized signal and just determine the proper phase. This in turn would reduce the effective bandwidth by some orders of magnitude and thus reduce the noise significantly.
Phase detection implies the need for fast convolution. The standard approach for efficient convolution is of course the fast Fourier transform. However, I am implementing this on the Arduino / ATmega328, so I have only 2 KB of RAM. So instead of the straightforward FFT approach, I started stacking matched phase-locked loop filters. I documented the different project stages here:
First try: exponential filter
Start of the better approach: phase lock to the signal / seconds ticks
Phase lock to the minutes
Decoding minute and hour data
Decoding the whole signal
Adding a local clock to deal with signal loss
Using local synthesized signal for faster lock reacquisition after signal loss
I searched the internet quite extensively and found no similar approach. Still, I wonder whether there are similar (and maybe better) implementations, or whether there is existing research on this kind of signal reconstruction.
What I am not searching for: designing optimized codes for getting close to the Shannon limit. I am also not searching for information on the superimposed PRNG code on DCF77. I also do not need hints on "matched filters" as my current implementation is an approximation of a matched filter. Specific hints on Viterbi Decoders or Trellis approaches are not what I am searching for - unless they address the issue of tight CPU and RAM constraints.
What I am searching for: are there any descriptions / implementations of other non-trivial algorithms for decoding signals like DCF77, with limited CPU and RAM, in the presence of significant noise? Maybe in some books or papers from the pre-internet era?
Have you considered using a chip matched filter to perform your convolution?
http://en.wikipedia.org/wiki/Matched_filter
They are almost trivially easy to implement, as each chip/bit period can be implemented as an add-subtract delay line (use a circular buffer).
A simple one for a square wave of unknown sequence but known frequency (it will also work, though less optimally, with other waveforms) can be implemented something like this:
#include <array>

// Circular buffer: insert() returns the element being overwritten (the oldest).
template <int length>
class CircularBuffer {
public:
    // constructor
    CircularBuffer() : element_pos(0) {
        buffer.fill(0);
    }
    // destructor
    ~CircularBuffer() {}

    int insert(int new_element) {
        int oldest = buffer[element_pos];
        buffer[element_pos] = new_element;
        element_pos += 1;
        if (element_pos == length) {
            element_pos = 0;
        }
        return oldest;
    }

private:
    std::array<int, length> buffer;
    int element_pos;
};

// Filter class: an add-subtract delay line over one bit period.
template <int samples_per_bit>
class matchedFilter {
public:
    // constructor
    matchedFilter() : acc(0) {}
    // destructor
    ~matchedFilter() {}

    int filterInput(int next_sample) {
        int temp = sample_buffer.insert(next_sample); // sample from one bit period ago
        temp -= next_sample;
        temp -= result_buffer.insert(temp);
        return temp;
    }

private:
    int acc;
    CircularBuffer<samples_per_bit> sample_buffer;
    CircularBuffer<samples_per_bit> result_buffer;
};
As you can see, resource-wise this is relatively trivial. If there is a specific waveform you're after, you can cascade these together to give a longer correlation.
The reference to matched filters by Ollie B. is not what I was asking for; I already covered this in my blog.
However, by now I have received a very good hint by private mail. There exists a paper, "Performance Analysis and Receiver Architectures of DCF77 Radio-Controlled Clocks" by Daniel Engeler. This is the kind of stuff I am searching for.
With further searches starting from the Engeler paper, I found the German patents DE3733966A1 ("Anordnung zum Empfang stark gestoerter Signale des Senders dcf-77", an arrangement for receiving heavily disturbed signals from the DCF77 transmitter) and DE4219417C2 ("Schmalbandempfänger für Datensignale", a narrow-band receiver for data signals).

Simultaneous and random output pins with Arduino

I have a project that requires eight DIFFERENT lights to cycle between on and off at random times, with random fade-in, random fade-out, and random on/off durations. My strategy is: fade on, leave on for a random time, fade off, leave off for a random time, repeat. Right now I select a random pin before each for loop, but I'd like to use a loop that randomly chooses a pin on which to run the WHOLE on/off cycle.
Here's my pseudocode. Or maybe it IS my code.
int pin = 0;
int fadeIn = 0;
int fadeOut = 0;
int onDuration = 0;
int offDuration = 0;

void setup() {
}

void loop() {
  pin = random(2, 8);
  onDuration = random(2000, 15000);
  for (fadeIn = 0; fadeIn < 255; fadeIn++) {
    analogWrite(pin, fadeIn);
  }
  delay(onDuration);

  pin = random(2, 8);
  offDuration = random(1000, 7000);
  for (fadeOut = 254; fadeOut > 0; fadeOut--) {
    analogWrite(pin, fadeOut);
  }
  delay(offDuration);
}
The loop (on, then off) would be one instance of a cycle. If I wanted a SECOND instance of the cycle to kick off on another pin WHILE the first cycle was running, is that something I can do programmatically? Or would I need eight controllers, each fading its light in and out at the same time?
In your code above, the fading in and fading out don't take a random amount of time. Is that what you intended? If you want them to, you'll need to add a delay at each iteration of the loops.
Anyway, this is something you can do without 8 separate boards.
Because it's embedded, you can't multithread very easily. You'll need to implement your own task scheduler, and each LED has to be thought of as its own task. Then you just keep track of the state each LED is in (fading in, on, fading out, or off). As you bounce between the different tasks, control each LED according to its state. A sketch of this follows below.
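A minimal sketch of that idea, using a millis()-based state machine per LED (the pin choices, the 10 ms fade step, and the duration ranges are illustrative assumptions; note a plain Uno only has six analogWrite-capable pins, so eight would need a Mega or software PWM):

enum LedState { FADE_IN, HOLD_ON, FADE_OUT, HOLD_OFF };

struct LedTask {
  byte pin;
  LedState state;
  int level;               // current PWM level, 0-255
  unsigned long nextEvent; // millis() time of the next action
};

const byte NUM_LEDS = 6;   // the Uno's PWM-capable pins
LedTask leds[NUM_LEDS];

void setup() {
  const byte pins[NUM_LEDS] = {3, 5, 6, 9, 10, 11};
  for (byte i = 0; i < NUM_LEDS; i++) {
    leds[i] = { pins[i], HOLD_OFF, 0, millis() + random(1000, 7000) };
  }
}

void loop() {
  unsigned long now = millis();
  for (byte i = 0; i < NUM_LEDS; i++) {
    LedTask &t = leds[i];
    if (now < t.nextEvent) continue; // this LED has nothing to do yet
    switch (t.state) {
      case FADE_IN:
        analogWrite(t.pin, ++t.level);
        if (t.level >= 255) { t.state = HOLD_ON; t.nextEvent = now + random(2000, 15000); }
        else t.nextEvent = now + 10; // one fade step every ~10 ms
        break;
      case HOLD_ON:
        t.state = FADE_OUT;
        t.nextEvent = now;
        break;
      case FADE_OUT:
        analogWrite(t.pin, --t.level);
        if (t.level <= 0) { t.state = HOLD_OFF; t.nextEvent = now + random(1000, 7000); }
        else t.nextEvent = now + 10;
        break;
      case HOLD_OFF:
        t.state = FADE_IN;
        t.nextEvent = now;
        break;
    }
  }
}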
As for a timing-based task scheduler, you have different options. Perhaps the easiest is to implement a periodic timer interrupt; the AVR datasheets explain this pretty well. For Arduino, there are also libraries you can use, for example:
http://playground.arduino.cc/code/timer1
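With that TimerOne library, the skeleton could look something like this (the 10 ms period is an arbitrary choice):

#include <TimerOne.h>

volatile bool tick = false;

void onTimer() { tick = true; }  // keep the ISR short; just set a flag

void setup() {
  Timer1.initialize(10000);      // period in microseconds: 10 ms
  Timer1.attachInterrupt(onTimer);
}

void loop() {
  if (tick) {
    tick = false;
    // advance each LED's state machine here, once per tick
  }
}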
Another option is to do something similar to this:
http://arduino.cc/forum/index.php?PHPSESSID=3e72433bc4375ee6c20d56f3998762ca&topic=5686.msg44073#msg44073
Just some suggestions. Sounds like an interesting project. Good luck!
You can manage this sort of thing (multiple lights/threads) using a procedural style of coding, with switch/case statements and a few variables to keep track of state. With the limited memory on an Arduino this is sometimes the only way to go.
On the other hand, I've had much success doing this sort of lighting control with object-oriented approaches, using custom classes and custom libraries. It is much easier, and your loop() then only needs to handle the higher-level logic and service each instance (e.g. tell it to update, and the instance handles the logic for that).
The only issue is the limited memory, so it may depend on which specific board you are using. I'd suggest giving it a try though: it will likely be fine memory-wise, and you will learn quite a lot.

Data to audio and back. Modulation / demodulation with source code

I have a stream of binary data and want to convert it to raw waveform sound data, which I can send to the speakers.
This is what the old-school modems did in order to transfer binary data over the phone line (producing the typical modemish sound). It is called modulation.
Then I need a reverse process - from the raw waveform samples, I want to obtain the exact binary data. This is called demodulation.
Any bitrate will work for a start.
The sound is played using computer speakers and sampled using a microphone.
Bandwidth would be quite low (low quality microphone).
There is some background noise but not much.
I found one particular way to do this - Frequency shift keying. The problem is I can't find any source code.
Can you please point me to an implementation of FSK in any language?
Or offer any alternative encoding binary<->sound with available source code?
The simplest modulation scheme would be amplitude modulation (in the digital realm this is technically called amplitude-shift keying). Take a fixed frequency (let's say 10 kHz) as your "carrier wave", and use the bits in your binary data to turn it on and off. If your data rate is 10 bits per second, you will be toggling the 10 kHz signal on and off at that rate. The demodulator would be an (optional) 10 kHz filter followed by comparison with a threshold. This is a fairly simple scheme to implement. Generally, the higher the signal frequency and your available bandwidth, the faster you can switch that signal on and off.
A very cool/fun application here would be to encode/decode as morse code and see how fast you can go.
FSK, shifting between two frequencies, is more efficient in bandwidth and more immune to noise, but makes the demodulator more complex, as you need to distinguish between the two frequencies.
Advanced modulation schemes such as phase-shift keying are good at getting the highest bit rate for a given bandwidth and signal-to-noise ratio, but they are more complicated to implement. Analog phone modems needed to deal with tight bandwidth (e.g. as little as 3 kHz) and noise limitations. If you need the highest possible bitrate given bandwidth and noise limitations, then PSK is the way to go.
For actual code samples of advanced modulation schemes I would investigate application notes from DSP vendors (such as TI and Analog Devices) as those were common applications for DSPs.
Implementing a PI/4 Shift D-QPSK Baseband Modem Using the TMS320C50
QPSK modulation demystified
V.34 Transmitter and Receiver Implementation on the TMS320C50 DSP
Another very simple and not so efficient method is to use DTMF. Those are the tones generated by phone keypads where each symbol is a combination of two frequencies. If you Google you'll find a lot of source code. Depending on your application/requirements this may be a simple solution.
Let's dive into some implementation details for a simple scheme, something like the morse code I mentioned earlier. We can use "dot" for 0 and "dash" for 1. An advantage of a morse-like scheme is that it also solves the framing problem, as you can resynchronize your sampling after every space. For simplicity, let's pick the "carrier wave" frequency at 11 kHz and assume your wave output is 44 kHz, 16-bit, mono. We'll also use a square wave, which will create harmonics, but we don't really care. If 11 kHz is beyond your microphone's frequency response, just divide all frequencies by 2. We'll pick some arbitrary level, 10000, and so our "on" waveform looks like this:
{10000, 10000, 0, 0, 10000, 10000, 0, 0, 10000, 0, 0, ...} // 4 samples = one 11 kHz period
and our "off" waveform is just all zeros. I leave the coding of this part as an exercise for the reader.
And so we have something like:
const int dot_samples   = 400; // ~10 ms - speed up later
const int space_samples = 400; // ~10 ms
const int dash_samples  = 800; // ~20 ms

void encode(uint8_t* source, int length, int16_t* target) // assumes enough room in target
{
    for (int i = 0; i < length; i++)
    {
        for (int j = 0; j < 8; j++)
        {
            if ((source[i] >> j) & 1) // if the data bit is 1 we encode a dash
            {
                generate_on(&target, dash_samples); // generate ON wave for n samples and update target ptr
            }
            else // otherwise a dot
            {
                generate_on(&target, dot_samples);  // generate ON wave for n samples and update target ptr
            }
            generate_off(&target, space_samples);   // generate zeros
        }
    }
}
The decoder is a bit more complicated, but here's an outline (a rough sketch of steps 2-4 follows below):
Optionally band-pass filter the sampled signal around 11 kHz. This will improve performance in a noisy environment. FIR filters are pretty simple, and there are a few online design applets that will generate the filter for you.
Threshold the signal: every value above 1/2 of the maximum amplitude is a 1, every value below is a 0. This assumes you have sampled the entire signal. If this is in real time, you either pick a fixed threshold or do some sort of automatic gain control, where you track the maximum signal level over some time window.
Scan for the start of a dot or dash. You probably want to see at least a certain number of 1's within a dot period to consider the samples a dot; then keep scanning to see whether it is a dash. Don't expect a perfect signal: you'll see a few 0's in the middle of your 1's and a few 1's in the middle of your 0's. If there's little noise, differentiating the "on" periods from the "off" periods should be fairly easy.
Then reverse the encoding process: if you see a dash, push a 1 bit to your buffer; if a dot, push a 0.
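Here is that rough sketch of steps 2 through 4 (the half-amplitude threshold, the gap bridging, and the 1.5-dot cutoff are assumptions on my part):

#include <stdint.h>
#include <stdlib.h>

/* Classify each "on" period as a dot (0) or a dash (1) after thresholding.
   Returns the number of symbols written to bits. */
int decode(const int16_t *samples, int n, int16_t max_amp,
           int dot_samples, uint8_t *bits)
{
    const int max_gap = 8;       /* ~2 carrier periods at 44 kHz / 11 kHz */
    int threshold = max_amp / 2; /* step 2: half the maximum amplitude */
    int nbits = 0, run = 0, gap = 0;

    for (int i = 0; i < n; i++) {
        if (abs(samples[i]) > threshold) {
            run += gap + 1;      /* bridge the carrier's own zero crossings */
            gap = 0;
        } else if (run > 0 && ++gap > max_gap) {
            /* the tone really stopped: longer than 1.5 dots means dash */
            bits[nbits++] = (run > dot_samples * 3 / 2) ? 1 : 0;
            run = gap = 0;
        }
    }
    return nbits;
}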
One purpose of modulation/demodulation is to adapt to channel characteristics. For instance, the channel might not be able to pass DC. Another purpose is to overcome a given amount and type of noise in the channel, while still transferring data above some given error rate.
For FSK, you simply want routines that can generate sine waves at two different frequencies on the transmit end, and filter and detect those two frequencies on the receiving end. The length of each segment of sine waves, the separation in frequency, and the amplitude will depend on the data rate and the amount of noise you need to overcome.
In the simplest case, zero noise, simply produce N or 2N sine-wave cycles within successive fixed time frames. Something like:
x[i] = amplitude * sin( i * 2 * pi * (data[j] ? 1.0 : 2.0) * freq / sampleRate );
On the receiving end, you can sample the signal at well above twice the maximum frequency, measure the distance between zero crossings, and see whether you find a bunch of short-period or long-period waveforms. Much fancier methods using digital signal processing filters (IIR, FIR, etc.) and various statistical detectors can be used in the presence of non-zero noise.
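A minimal sketch of that zero-noise transmit side in C (the mark frequency, bit rate, and amplitude are arbitrary choices, and the function name is mine):

#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE     44100
#define MARK_FREQ       1000.0 /* frequency for a 1 bit; a 0 bit uses 2x */
#define SAMPLES_PER_BIT 441    /* 100 bits per second at 44.1 kHz */
#define AMPLITUDE       10000.0

/* Writes SAMPLES_PER_BIT 16-bit samples per data bit into out. */
void fsk_modulate(const uint8_t *bits, int nbits, int16_t *out)
{
    for (int j = 0; j < nbits; j++) {
        double freq = bits[j] ? MARK_FREQ : 2.0 * MARK_FREQ;
        for (int i = 0; i < SAMPLES_PER_BIT; i++) {
            *out++ = (int16_t)(AMPLITUDE *
                               sin(i * 2.0 * M_PI * freq / SAMPLE_RATE));
        }
    }
}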
