My code is based on this example, and a stop function is already implemented; there are calls to stop, uninitialize and dispose of the audio unit:
AudioOutputUnitStop(toneUnit);
AudioUnitUninitialize(toneUnit);
AudioComponentInstanceDispose(toneUnit);
toneUnit = nil;
In the example I linked to, a pause function is not necessary, since only a single frequency is played, so there is no difference between pause and stop. In my implementation, however, I'm playing a range of different frequencies and I want to be able to pause playback.
Any ideas how this is done?
1) Fade out (~10 ms) the AU's output, then feed the output silence after the fade.
2) Remember the position where you stopped reading your input signal.
3) Reset the AUs before you resume.
4) Resume from the position you recorded above (a short fade-in of the input signal to the AUs would also be good here).
A rough sketch of this approach follows.
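It assumes a Float32, mono output format at 44.1 kHz (so ~10 ms is roughly 441 frames), and ToneState, renderTone and the phase/savedPhase/fadeGain/pausing/paused fields are hypothetical names for state you would hand to the callback via inRefCon; it is a sketch of the idea, not drop-in code:

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    double  phase;       // current read position in the signal you generate/play
    double  savedPhase;  // position remembered when the fade-out completes
    Float32 fadeGain;    // 1.0 = full volume, 0.0 = silence
    Boolean pausing;     // set this from your pause button
    Boolean paused;      // set here once the fade-out has finished
} ToneState;

static OSStatus renderTone(void                       *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp       *inTimeStamp,
                           UInt32                      inBusNumber,
                           UInt32                      inNumberFrames,
                           AudioBufferList            *ioData)
{
    ToneState *state = (ToneState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    const Float32 fadeStep = 1.0f / 441.0f;       // ~10 ms fade at 44.1 kHz

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        if (state->paused) {
            out[i] = 0.0f;                        // feed silence after the fade
            continue;
        }

        Float32 sample = 0.0f;                    // generate or read your next sample here
        out[i] = sample * state->fadeGain;
        state->phase += 1.0;                      // advance your read position

        if (state->pausing && state->fadeGain > 0.0f) {
            state->fadeGain -= fadeStep;          // fade out over ~10 ms
            if (state->fadeGain <= 0.0f) {
                state->fadeGain   = 0.0f;
                state->savedPhase = state->phase; // remember where you stopped
                state->paused     = true;
            }
        }
    }
    return noErr;
}

To resume, restore phase from savedPhase, clear pausing/paused, ramp fadeGain back up to 1.0, call AudioUnitReset(toneUnit, kAudioUnitScope_Global, 0), and call AudioOutputUnitStart(toneUnit) if you had also stopped the output unit.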
You can try writing 0s to the buffer in your playback callback. It is effectively a pause.
Here's some code that you can use within the playback callback.
NSLog(@"Playing silence!");
for (int i = 0; i < ioData->mNumberBuffers; i++) {
    // get the buffer to be filled
    AudioBuffer buffer = ioData->mBuffers[i];
    UInt32 *frameBuffer = buffer.mData;
    // loop through the buffer and fill the frames with silence
    for (int j = 0; j < inNumberFrames; j++) {
        frameBuffer[j] = 0;
    }
}
I'm trying to make a program in Processing 3.x that uses my webcam and filters out the green and blue from an image, showing either red and black, or white. I've tried two ways that I think should work, but I must be missing something here, because my whole screen just goes black when I switch to this red mode.
Attempt 1:
for(int y = 0; y < height; y++){
    for(int x = 0; x < width; x++){
        set(x,y,color(red(x+(y*width)),0,0));
    }
}
Attempt 2:
// inputImage is the image on my screen of type PImage.
for(int c = 0; c < inputImage.pixels.length; c++){
inputImage.pixels[c] = color(red(c),0,0);
}
Both attempts resulted in a black screen.
Attempt 2 is the clearer way to do it, so I'll address that.
You need to call loadPixels() on inputImage before you write to the pixels[] array, and finally call updatePixels() to apply the changes.
You had also called red() on c (the loop index), not on inputImage.pixels[c].
Result
inputImage.loadPixels();
for (int c = 0; c < inputImage.pixels.length; c++) {
    inputImage.pixels[c] = color(red(inputImage.pixels[c]), 0, 0);
}
inputImage.updatePixels();
If you need the code to be faster, consider setting the colour without the two method calls (to color() and red()) by using this bit mask:
inputImage.pixels[c] = inputImage.pixels[c] & 0xFF0000;
Good evening,
I'm trying to implement a simple single-channel ADC reader on a dsPIC33FJ128MC802 that manually starts sampling, automatically converts when sampling is done, and then reads and stores the data.
This has never been an issue for me before, but this microcontroller's ADC doesn't seem to behave normally.
I've read through the datasheet section on the ADC several times and configured it to the best of my ability, yet the ADC1BUF0 value keeps jumping around inconsistently between 0 and 4095 when I have a lab power supply connected directly to the input pins of the ADC.
What I see is that the ADC1BUF0 value roughly corresponds to the input voltage (0-3.3 V): when I pause the debugger, it gives a couple (2-4) of readings that are within about ±100 (out of 4096, which is not bad).
Then, if I continue to run and pause with the voltage kept the same, the values stored in the buffer suddenly begin to jump by ±500, sometimes even showing 4095 (all 1's) or 0.
When I change the lab PSU to a different voltage, it repeats the process: a few correct values, then jumping around again.
So essentially it shows me a correct value about half of the times I pause the debugger.
I don't know what is causing this. I know that I need to re-run the debugger after changing the voltage so that it can clear out the buffers, but something about this microcontroller seems definitely wrong.
Please let me know what can be done to fix this.
The compiler is XC16 and the IDE is MPLAB 8.92.
Thanks,
Below is my configuration:
[code]
void InitADC() {
    TRISAbits.TRISA0 = 1;        // RA0/AN0 as input

    AD1CON1bits.FORM = 0;        // Data output format: integer
    AD1CON1bits.SSRC = 7;        // Internal counter (SAMC) ends sampling and starts conversion
    AD1CON1bits.ASAM = 0;        // Sampling begins when the SAMP bit is set (manual sampling)
    AD1CON1bits.AD12B = 1;       // 12-bit, 1-channel ADC operation
    AD1CON1bits.SIMSAM = 1;      // Sample channels simultaneously (only relevant when CHPS > 0)

    AD1CON2bits.CHPS = 0;        // Convert CH0 only
    AD1CON2bits.CSCNA = 0;       // Do not scan inputs
    AD1CON2bits.VCFG = 0;        // Use AVdd/AVss as the voltage reference
    AD1CON2bits.ALTS = 0;        // Always use input select for channel A
    AD1CON2bits.BUFM = 0;        // Always start filling at buffer 0

    AD1CON3bits.ADRC = 0;        // ADC clock is derived from the system clock
    AD1CON3bits.SAMC = 0;        // Auto-sample time = 0 * Tad
    AD1CON3bits.ADCS = 2;        // ADC conversion clock Tad = Tcy*(ADCS+1) = (1/40M)*3 = 75 ns (13.3 MHz)
                                 // the 12-bit conversion itself takes 14 * Tad = 1.05 us

    AD1CON1bits.ADDMABM = 1;     // DMA buffers are written in conversion order
    AD1CON2bits.SMPI = 0;        // SMPI must be 0
    AD1CON4bits.DMABL = 0;       // Only 1 DMA buffer for each analog input

    // AD1CHS0/AD1CHS123: A/D input select registers
    AD1CHS0bits.CH0SA = 0;       // MUXA +ve input selection (AN0) for CH0
    AD1CHS0bits.CH0NA = 0;       // MUXA -ve input selection (Vref-) for CH0
    AD1CHS123bits.CH123SA = 0;   // MUXA +ve input selection (AN0) for CH1
    AD1CHS123bits.CH123NA = 0;   // MUXA -ve input selection (Vref-) for CH1

    IFS0bits.AD1IF = 0;          // Clear the A/D interrupt flag bit
    IEC0bits.AD1IE = 0;          // Do not enable the A/D interrupt

    AD1CSSL = 1;                 // Scan AN0 only
    AD1PCFGL = 0b111111110;      // Only AN0 in analog input mode

    AD1CON1bits.ADON = 1;        // Turn on the A/D converter
}
int main() {
    unsigned int myVoltage;          // most recent ADC reading

    ADPCFG = 0xFFFE;                 // make ADC pins all digital except AN0 (RA0)
    InitADC();                       // configure and enable the ADC

    while(1)
    {
        AD1CON1bits.SAMP = 1;        // start sampling; SSRC = 7 auto-starts the conversion
        while(!AD1CON1bits.DONE);    // wait for the conversion to complete
        myVoltage = ADC1BUF0;        // read the result
    }
    return 0;
}
[/code]
It seems I missed a semicolon after while(!AD1CON1bits.DONE).
Without the semicolon it did not wait for the conversion to complete.
I have corrected this in the original post, in case someone wants to use the source from this post.
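For anyone else hitting this, here is a minimal illustration of the difference (a sketch using the same register names as above, not code taken from my project):

AD1CON1bits.SAMP = 1;            // start sampling

// correct: the semicolon gives the while loop an empty body,
// so execution spins here until the conversion finishes
while (!AD1CON1bits.DONE);
myVoltage = ADC1BUF0;            // read the finished result

// buggy: without the semicolon the next statement becomes the loop body,
// so ADC1BUF0 is read while the conversion is still in progress:
// while (!AD1CON1bits.DONE)
//     myVoltage = ADC1BUF0;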
Thank you,
I'm using the mbed platform to program a motion controller on an ARM MCU.
I need to determine the time at each iteration of a while loop, but am struggling to think of the best way to do this.
I have two potential methods:
1) Define how many iterations can be done per second and use "wait" so each iteration occurs after a regular interval. I can then increment a counter to determine time.
2) Capture system time before going into the loop and then continuously loop, subtracting current system time from original system time to determine time.
Am I thinking along the right lines, or have I completely missed it?
Your first option isn't ideal, since the wait and counter overhead will throw off the numbers and you will end up with less accurate information about your iterations.
The second option is viable, depending on how you implement it. mbed has a library called "Timer.h" that would be an easy solution to your problem. The Timer is interrupt based (it uses Timer3 on an LPC1768); you can see the handbook here: mbed.org/handbook/Timer.

The timers are 32-bit microsecond counters on the Cortex-M3, which means the library can keep time up to a maximum of about 30 minutes, so it is ideal for durations between microseconds and seconds (if you need longer than that, you will need a real-time clock). It's up to you whether you want the count in milliseconds or microseconds: call read_us() for microseconds or read_ms() for milliseconds. The Timer interrupts themselves affect the measurement by 1-2 microseconds, so bear that in mind if you want to keep track at that level rather than in milliseconds.
Here is some sample code for what you are trying to accomplish (based on an LPC1768 and written using the online compiler):
#include "mbed.h"
#include "Timer.h"
Timer timer;
Serial device (p9,p10);
int main() {
device.baud(19200); //setting baud rate
int my_num=10; //number of loops in while
int i=0;
float sum=0;
float dev=0;
float num[my_num];
float my_time[my_num]; //initial values of array set to zero
for(int i=0; i<my_num; i++)
{
my_time[i]=0; //initialize array to 0
}
timer.start(); //start timer
while (i < my_num) //collect information on timing
{
printf("Hello World\n");
i++;
my_time[i-1]=timer.read_ms(); //needs to be the last command before loop restarts to be more accurate
}
timer.stop(); //stop timer
sum=my_time[0]; //set initial value of sum to first time input
for(i=1; i < my_num; i++)
{
my_time[i]=my_time[i]-my_time[i-1]; //making the array hold each loop time
sum=sum+my_time[i]; //sum of times for mean and standard deviation
}
sum = sum/my_num; //sum of times is now the mean so as to not waste memory
device.printf("Here are the times for each loop: \n");
for(i=0; i<my_num; i++)
{
device.printf("Loop %d: %.3f\n", i+1, my_time[i]);
}
device.printf("Your average loop time is %.3f ms\n", sum);
for(int i=0; i<my_num; i++)
{
num[i]= my_time[i]-sum;
dev = dev +(num[i])*(num[i]);
}
dev = sqrt(dev/(my_num-1)); //dev is now the value of the standard deviation
device.printf("The standard deviation of your loops is %.3f ms\n", dev);
return 0;
}
Another option is the SysTick timer, which can be used much like the functions above and would make your code more portable to any ARM device with a Cortex-M core, since it is based on the processor's system timer (read more here: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0497a/Babieigh.html). It really depends on how precise and portable you want your project to be!
Original source: http://community.arm.com/groups/embedded/blog/2014/09/05/intern-inquiry-95
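To give a feel for the SysTick route mentioned above, here is a rough sketch of my own (not taken from the original source) using the CMSIS SysTick_Config() call on an LPC1768-style target; it assumes nothing else in your build (e.g. an RTOS) already owns SysTick, and the msTicks name is just for illustration:

#include "mbed.h"                       // pulls in the CMSIS core headers and SystemCoreClock

volatile uint32_t msTicks = 0;          // milliseconds since SysTick was started

extern "C" void SysTick_Handler(void) { // standard CMSIS handler name
    msTicks++;                          // fires once per millisecond
}

int main() {
    // interrupt every 1 ms: SystemCoreClock is the core frequency in Hz
    SysTick_Config(SystemCoreClock / 1000);

    uint32_t last = msTicks;
    while (true) {
        // ... one iteration of the motion control loop ...

        uint32_t now = msTicks;
        printf("Loop took %lu ms\n", (unsigned long)(now - last));
        last = now;
    }
}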
I am working on entropy. I am getting consecutive frames from an .mp4 file and I want to compute the entropy between the current frame and the previous frame. If the entropy between them is not zero it should process the frame; otherwise it should ignore it, keep the previous frame, and take the next current frame after 2 seconds. If the entropy is zero it should ignore the frame and again wait 2 seconds. Here is my code:
capture.open("recog.mp4");
if (!capture.isOpened()) {
    cerr << "can not open camera or video file" << endl;
}

while (1)
{
    capture >> current_frame;
    if (current_frame.empty())
        break;

    if (!previous_frame.empty()) {
        subtract(current_frame, previous_frame, pre_img);

        Mat hist;
        int channels[] = {0};
        int histSize[] = {32};
        float range[] = { 0, 256 };
        const float* ranges[] = { range };
        calcHist(&pre_img, 1, channels, Mat(),  // do not use mask
                 hist, 1, histSize, ranges,
                 true,                          // the histogram is uniform
                 false);

        Mat histNorm = hist / (pre_img.rows * pre_img.cols);

        double entropy = 0.0;
        for (int i = 0; i < histNorm.rows; i++)
        {
            float binEntry = histNorm.at<float>(i, 0);
            if (binEntry != 0.0)
            {
                entropy -= binEntry * log(binEntry);
            }
            else
            {
                // ignore the frame and go for the next one, but how to code it? is there any function to ignore?
            }
        }
    }

    waitKey(10);
    current_frame.copyTo(previous_frame);
}
This counts the entropy of only one image, the current one, which then becomes the previous image when the next one comes in to be processed, as far as my work so far tells me. It gives me an error on log2 when I use it like this: entropy -= binEntry * log2(binEntry);. Can you please help me with how to ignore a frame when the entropy is zero, so that the .mp4 keeps running? And do I need to use cvWaitKey(2) to check the .mp4 after 2 seconds, i.e. the .mp4 keeps running but I am ignoring frames?
By "ignore" I mean: when it subtracts the current frame from the previous one and the entropy is 0, the previous frame stays the previous frame (the current one does not replace it), and it waits 2 seconds for the next current image and then performs the task on that.
To ignore a certain number of frames, simply read them from the stream.
for (int i = 0; i < 60; i++)
    capture >> current_frame;
If your video runs at 30 fps, this will skip 2 seconds of video.
To act in case your entropy is greater than a certain threshold you need to add something like this:
if ( entropy > 1.0 )
{
// do something
}
I used a threshold because, due to noise, the entropy will probably never be exactly zero between different frames.
If your compiler does not offer the log2 function, you can simply emulate it as described here.
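For example, a minimal sketch of that emulation, using the identity log2(x) = log(x) / log(2) (the helper name log2_compat is just for illustration, not from the linked answer):

#include <cmath>

// log2 emulation for toolchains that lack std::log2 (e.g. pre-C++11)
static inline double log2_compat(double x)
{
    return std::log(x) / std::log(2.0);
}

// usage in the entropy loop:
// entropy -= binEntry * log2_compat(binEntry);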
Right, so let's say I want to compare two BitmapDatas. One is an image of a background (not solid, it has varying pixels), and another is of something (like a sprite) on top of the exact same background. Now what I want to do is remove the background from the second image, by comparing the two images, and removing all the pixels from the background that are present in the second image.
For clarity, basically I want to do this in AS3.
Now I came up with two ways to do this, and they both work perfectly. One compares pixels directly, while the other uses the BitmapData.compare() method first, then copies the appropriate pixels into the result. What I want to know is which way is faster.
Here are my two ways of doing it:
Method 1
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (layer1.getPixel32(i, j) != layer2.getPixel32(i, j))
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
Method 2
result = layer1.compare(layer2) as BitmapData;
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (result.getPixel32(i, j) != 0x00000000)
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
layer1 is the background, layer2 is the image the background will be removed from, and result is just a BitmapData that the result will come out on.
These are very similar, and personally I haven't noticed any difference in speed, but I was just wondering if anybody knows which would be faster. I'll probably use Method 1 either way, since BitmapData.compare() doesn't compare pixel alpha unless the colours are identical, but I still thought it wouldn't hurt to ask.
This might not be 100% applicable to your situation, but FWIW I did some research into this a while back, here's what I wrote back then:
I've been meaning to try out Grant Skinner's performance test harness for a while, so this was my opportunity.
I tested the native compare, the iterative compare, a reverse iterative compare (because I know they're a bit faster) and finally a compare using the DIFFERENCE blend mode.
The reverse iterative compare looks like this:
for (var bx:int = _base.width - 1; bx >= 0; --bx) {
    for (var by:int = _base.height - 1; by >= 0; --by) {
        if (_base.getPixel32(bx, by) != compareTo.getPixel32(bx, by)) {
            return false;
        }
    }
}
return true;
The blendmode compare looks like this:
var test:BitmapData = _base.clone();
test.draw(compareTo, null, null, BlendMode.DIFFERENCE);
var rect:Rectangle = test.getColorBoundsRect(0xffffff, 0x000000, false);
return (rect.toString() != _base.rect.toString());
I'm not 100% sure this is completely reliable, but it seemed to work.
I ran each test for 50 iterations on 500x500 images.
Unfortunately my pixel bender toolkit is borked, so I couldn't try that method.
Each test was run in both the debug and release players for comparison, notice the massive differences!
Complete code is here: http://webbfarbror.se/dump/bitmapdata-compare.zip
You might try a shader that checks whether the two images match and, where they don't, writes one of the images' pixels to the output. Like this:
kernel NewFilter
<
    namespace : "Your Namespace";
    vendor : "Your Vendor";
    version : 1;
    description : "your description";
>
{
    input image4 srcOne;
    input image4 srcTwo;
    output pixel4 dst;

    void evaluatePixel()
    {
        float2 positionHere = outCoord();
        pixel4 fromOne = sampleNearest(srcOne, positionHere);
        pixel4 fromTwo = sampleNearest(srcTwo, positionHere);
        float4 difference = fromOne - fromTwo;
        if (abs(difference.r) < 0.01 && abs(difference.g) < 0.01 && abs(difference.b) < 0.01)
            dst = pixel4(0.0, 0.0, 0.0, 1.0);
        else
            dst = fromOne;
    }
}
This will output a black pixel where the images match, and the first image's pixel where they don't. Use Pixel Bender to make it run in Flash; a tutorial is here. Shaders are usually considerably faster than plain AS3 code.