I'm trying to get frequencies from the microphone using Processing. I have mixed two examples from the docs, but "highest" is not really in Hz (an A is 440 Hz).
Do you know how to get something better than this?
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;
int highest = 0;

void setup()
{
  size(1024, 200, P2D);
  minim = new Minim(this);
  minim.debugOn();
  in = minim.getLineIn(Minim.MONO, 4096, 44100);
  fft = new FFT(in.left.size(), 44100);
}

void draw()
{
  background(0);
  stroke(255);
  fft.forward(in.left);
  highest = 0;
  for (int n = 0; n < fft.specSize(); n++) {
    // draw the line for frequency band n, scaling it by 4 so we can see it a bit better
    line(n/4, height, n/4, height - fft.getBand(n)*4);
    // find frequency with highest amplitude
    if (fft.getBand(n) > fft.getBand(highest))
      highest = n;
  }
  println(highest);
  //println(fft.getFreq(110));
  // draw the waveforms
  for (int i = 0; i < in.bufferSize() - 1; i++)
  {
    line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
    line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
  }
}

void stop()
{
  // always close Minim audio classes when you are done with them
  in.close();
  minim.stop();
  super.stop();
}
You need to do a bit of conversion, depending on what you want to get:
The spectrum does not represent individual frequencies, but actually represents frequency bands centered on particular frequencies. The center frequency of each band is usually expressed as a fraction of the sampling rate of the time domain signal and is equal to the index of the frequency band divided by the total number of bands. The total number of frequency bands is usually equal to the length of the time domain signal, but access is only provided to frequency bands with indices less than half the length, because they correspond to frequencies below the Nyquist frequency. In other words, given a signal of length N, there will be N/2 frequency bands in the spectrum.
As an example, if you construct an FFT with a timeSize of 1024 and a sampleRate of 44100 Hz, then the spectrum will contain values for frequencies below 22050 Hz, which is the Nyquist frequency (half the sample rate). If you ask for the value of band number 5, this will correspond to a frequency band centered on 5/1024 * 44100 = 0.0048828125 * 44100 = 215 Hz. The width of that frequency band is equal to 2/1024, expressed as a fraction of the total bandwidth of the spectrum. The total bandwidth of the spectrum is equal to the Nyquist frequency, which in this case is 22050 Hz, so the bandwidth is equal to about 43 Hz. It is not necessary for you to remember all of these relationships, though it is good to be aware of them. The method getFreq allows you to query the spectrum with a frequency in Hz, and the method getBandWidth will return the bandwidth in Hz of each frequency band in the spectrum.
from the Minim Manual, FFT section.
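Applied to your sketch, the missing step is converting the winning band index to Hz: multiply by the sample rate and divide by the FFT's timeSize. A minimal sketch of the conversion, using the names from your code (Minim's FFT also offers indexToFreq() and getBandWidth(), so you don't have to do the arithmetic yourself):

// convert the index of the loudest band into its center frequency in Hz
float sampleRate = 44100;
float timeSize = in.left.size();                  // 4096 in the sketch above
float centerFreq = highest * sampleRate / timeSize;
// or, equivalently, let Minim do the conversion:
// float centerFreq = fft.indexToFreq(highest);
println(centerFreq + " Hz, +/- " + fft.getBandWidth() / 2 + " Hz");

With a 4096-point FFT at 44100 Hz the bands are about 10.8 Hz wide, so a 440 Hz A should land in band 41.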
This is from a sample program for OpenCL programming.
I am confused about how the global and local work sizes are computed. They are computed based on the image size.
The image size is 1920 x 1080 (w x h).
What I assumed is that global_work_size[0] and global_work_size[1] form a grid over the image. But global_work_size turns out to be {128, 1088}.
Then local_work_size[0] and local_work_size[1] form a grid over global_work_size; local_work_size is {128, 32}.
But the total number of groups, num_groups, is 34; it is not 128 x 1088.
The maximum workgroup_size available on the device is 4096.
How is the image distributed into such global and local work group sizes?
They are calculated in the following function.
clGetKernelWorkGroupInfo(histogram_rgba_unorm8, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), &workgroup_size, NULL);
{
    size_t gsize[2];
    int w;

    if (workgroup_size <= 256)
    {
        gsize[0] = 16; //workgroup_size is formed into row & col
        gsize[1] = workgroup_size / 16;
    }
    else if (workgroup_size <= 1024)
    {
        gsize[0] = workgroup_size / 16;
        gsize[1] = 16;
    }
    else
    {
        gsize[0] = workgroup_size / 32;
        gsize[1] = 32;
    }

    local_work_size[0] = gsize[0];
    local_work_size[1] = gsize[1];

    w = (image_width + num_pixels_per_work_item - 1) / num_pixels_per_work_item; //to include all pixels, num_pixels_per_work_item is added first

    global_work_size[0] = ((w + gsize[0] - 1) / gsize[0]);            //col
    global_work_size[1] = ((image_height + gsize[1] - 1) / gsize[1]); //row

    num_groups = global_work_size[0] * global_work_size[1];
    global_work_size[0] *= gsize[0];
    global_work_size[1] *= gsize[1];
}

err = clEnqueueNDRangeKernel(queue, histogram_rgba_unorm8, 2, NULL, global_work_size, local_work_size, 0, NULL, NULL);
if (err)
{
    printf("clEnqueueNDRangeKernel() failed for histogram_rgba_unorm8 kernel. (%d)\n", err);
    return EXIT_FAILURE;
}
I don't see any great mystery here. If you follow the calculation, the values do indeed end up as you say. (Not that the group size is particularly efficient, in my opinion.)
If workgroup_size is indeed 4096, gsize ends up as { 128, 32 }, since the else branch (> 1024) is taken.
w is the number of columns num_pixels_per_work_item = 32 pixels wide, i.e. the minimum number of work-items needed to cover the entire width; for an image width of 1920 that is 60. In other words, we require an absolute minimum of 60 x 1080 work-items to cover the entire image.
Next, the number of group columns and rows is calculated and temporarily stored in global_work_size. As the group width has been set to 128, a w of 60 means we end up with 1 column of groups. (This seems a waste of resources: more than half of the 128 work-items in each group will not be doing anything.) The number of group rows is simply image_height divided by gsize[1] (32), rounded up (33.75 -> 34).
The total number of groups can now be determined by multiplying out the grid: num_groups = global_work_size[0] * global_work_size[1].
To get the true total number of work-items in each dimension, each dimension of global_work_size is now multiplied by the group size in that dimension: { 1, 34 } multiplied by { 128, 32 } yields { 128, 1088 }.
This actually covers an area of 4096 x 1088 pixels, so about 53% of it is wastage. This is mainly because the algorithm for group dimensions favours wide groups, and each work-item works on a 32x1-pixel slice of the image. It would be better to favour tall work groups to reduce the amount of rounding.
For example, if we reverse gsize[0] and gsize[1], we'd get a group size of { 32, 128 } in this case, giving us a global work size of { 64, 1152 } and only 12% wastage. It would also be worth checking whether always picking the largest possible group size is even a good idea; it quite possibly isn't, but I've not looked into the kernel's computation in detail, let alone run any measurements, to say whether that's the case.
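To make the arithmetic easy to verify, here is a small stand-alone sketch (plain Java, with this example's numbers hard-coded as assumptions) that reproduces the grid calculation for both the wide layout above and the reversed, tall one:

// Reproduces the work-size arithmetic for a 1920x1080 image where each
// work-item processes a 32x1 slice of pixels (num_pixels_per_work_item = 32).
public class GridMath {

    static void layout(String label, int gx, int gy) {
        int imageWidth = 1920, imageHeight = 1080, pixelsPerItem = 32;
        int w = (imageWidth + pixelsPerItem - 1) / pixelsPerItem; // 60 columns of work-items
        int groupCols = (w + gx - 1) / gx;                        // round up to whole groups
        int groupRows = (imageHeight + gy - 1) / gy;
        int globalX = groupCols * gx, globalY = groupRows * gy;
        long covered = (long) globalX * pixelsPerItem * globalY;  // pixels actually touched
        long imagePixels = (long) imageWidth * imageHeight;
        System.out.printf("%s: groups=%d, global={%d, %d}, wastage=%.0f%%%n",
                label, groupCols * groupRows, globalX, globalY,
                100.0 * (covered - imagePixels) / covered);
    }

    public static void main(String[] args) {
        layout("wide 128x32", 128, 32); // groups=34, global={128, 1088}, wastage=53%
        layout("tall 32x128", 32, 128); // groups=18, global={64, 1152}, wastage=12%
    }
}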
I have an NRG #40 wind speed sensor whose output frequency is linear with wind speed.
The output signal ranges from 0 Hz to 125 Hz;
0 Hz means 0.35 m/s and 125 Hz means 96 m/s, and the transfer function is
m/s = (Hz x 0.765) + 0.35
How can I interface this sensor with an Arduino Mega?
Previously I connected an Adafruit anemometer (product ID: 1733), whose output is a voltage (not a frequency) that is linear with wind speed.
This is the code for the Adafruit:
//Setup Variables
const int sensorPin = A0; //Defines the pin that the anemometer output is connected to
int sensorValue = 0;      //Variable stores the value direct from the analog pin
float sensorVoltage = 0;  //Variable that stores the voltage (in Volts) from the anemometer being sent to the analog pin
float windSpeed = 0;      //Wind speed in meters per second (m/s)

float voltageConversionConstant = .004882814; //This constant maps the value provided from the analog read function, which ranges from 0 to 1023, to actual voltage, which ranges from 0V to 5V
int sensorDelay = 1000;   //Delay between sensor readings, measured in milliseconds (ms)

//Anemometer Technical Variables
//The following variables correspond to the anemometer sold by Adafruit, but could be modified to fit other anemometers.
float voltageMin = .4;    //Minimum output voltage from anemometer, in volts
float windSpeedMin = 0;   //Wind speed in meters/sec corresponding to minimum voltage
float voltageMax = 2.0;   //Maximum output voltage from anemometer, in volts
float windSpeedMax = 32;  //Wind speed in meters/sec corresponding to maximum voltage

void setup()
{
  Serial.begin(9600); //Start the serial connection
}

void loop()
{
  sensorValue = analogRead(sensorPin); //Get a value between 0 and 1023 from the analog pin connected to the anemometer
  sensorVoltage = sensorValue * voltageConversionConstant; //Convert sensor value to actual voltage

  //Convert voltage value to wind speed using range of max and min voltages and wind speed for the anemometer
  if (sensorVoltage <= voltageMin) {
    windSpeed = 0; //Check if voltage is below minimum value. If so, set wind speed to zero.
  } else {
    windSpeed = (sensorVoltage - voltageMin)*windSpeedMax/(voltageMax - voltageMin); //For voltages above minimum value, use the linear relationship to calculate wind speed.
  }

  //Print voltage and windspeed to serial
  Serial.print("Voltage: ");
  Serial.print(sensorVoltage);
  Serial.print("\t");
  Serial.print("Wind speed: ");
  Serial.println(windSpeed);

  delay(sensorDelay);
}
Assuming you use an Arduino UNO or Nano, an easy way is to connect the sensor to pin D2 or D3, which can be used as interrupt pins. (The same approach works on your Mega, where pins 2, 3, 18, 19, 20 and 21 support external interrupts.)
You then write a function, an ISR (interrupt service routine), that gets called every time the sensor pulses, and attach that function to the interrupt pin.
So it will look something like this:
const byte sensorPin = 2;
volatile unsigned long pulses = 0;  // written from the ISR, so it must be volatile
double wSpeed = 0;
unsigned long updateTimer = 0;
const int updateDuration = 3000;    // ms between wind speed updates

void setup() {
  Serial.begin(115200);
  pinMode(sensorPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(sensorPin), sensorISR, FALLING);
}

void loop() {
  unsigned long now = millis();
  if (updateTimer < now) {
    updateTimer = now + updateDuration;

    noInterrupts();                 // read and reset the counter atomically
    unsigned long count = pulses;
    pulses = 0;
    interrupts();

    double frequency = count / (updateDuration / 1000.0); // pulses per second = Hz
    wSpeed = (frequency * 0.765) + 0.35;                  // NRG #40 transfer function
    Serial.println("Windspeed is: " + String(wSpeed));
  }
}

void sensorISR() {
  pulses++;
}
The ISR function's only job is to increment the pulses variable on every pulse. Then, every second, you can calculate the frequency and hence the speed. If you wait 3 seconds instead, as above, you get better resolution, but you have to account for the extra time in the equation (which the division by updateDuration / 1000.0 does).
I have not tested this code.
Scenario:
I'm experimenting with a thermocouple amplifier (SN-6675) and an Arduino DUE.
After including the MAX6675 library, the Arduino can measure room temperature.
However, the temperature measured by the Arduino has two issues:
1) an offset compared to a Fluke thermometer;
2) lots of noise, and it keeps fluctuating even after taking the average of every 5 temperature samples.
E.g. the Fluke thermometer reads 28.9 C at room temperature, while the Arduino reads 19.75 to 45.75 C.
Question: Is there any method/filter to reduce the measurement noise and give a steady output?
Code is attached for reference.
#include <MAX6675.h>

//TCamp Int
int CS = 7;        // CS pin on MAX6675
int SO = 8;        // SO pin of MAX6675
int SCKpin = 6;    // SCK pin of MAX6675
int units = 1;     // Units to readout temp (0 = ˚F, 1 = ˚C)
float error = 0.0; // Temperature compensation error
float tmp = 0.0;   // Temperature output variable

//checking
int no = 0;

MAX6675 temp0(CS, SO, SCKpin, units, error); // Initialize the MAX6675 Library for our chip

void setup() {
  Serial.begin(9600); // initialize serial communications at 9600 bps:
}

void loop() {
  no = no + 1;
  tmp = temp0.read_temp(5); // Read the temp 5 times and return the average value to the var
  Serial.print(tmp);
  Serial.print("\t");
  Serial.println(no);
  delay(1000);
}
Any method/filter to reduce the measured noise, and gives a steady output?
The Kalman filter is pretty much the standard method for this:
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.
If your background isn't maths, don't be put off by the formulas that you come across. In the single-variable case like yours, the filter is remarkably easy to implement, and I am sure googling will find a few implementations.
The filter will give you an estimate of the temperature as well as an estimate of the variance of the temperature (the latter gives you an idea of how confident the filter is about its current estimate).
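To make "remarkably easy" concrete, here is a minimal sketch of a single-variable Kalman filter (plain Java for readability; it ports to Arduino C++ almost line for line, and the two noise constants are assumptions you would tune for your sensor):

// A one-dimensional Kalman filter for a temperature that is roughly constant
// between readings. processNoise and measurementNoise are tuning assumptions.
public class ScalarKalman {
    private double estimate = 25.0;         // initial guess, in degrees C
    private double errorCovariance = 100.0; // initial uncertainty (large = "no idea")
    private final double processNoise = 0.01;     // how fast the true temperature may drift
    private final double measurementNoise = 25.0; // sensor variance (stddev ~5 C)

    public double update(double measurement) {
        // predict: the state is modelled as constant, so only the uncertainty grows
        errorCovariance += processNoise;
        // correct: blend prediction and measurement by their relative confidence
        double kalmanGain = errorCovariance / (errorCovariance + measurementNoise);
        estimate += kalmanGain * (measurement - estimate);
        errorCovariance *= (1 - kalmanGain);
        return estimate;
    }
}

Feed each raw reading through update() and use the returned estimate; a larger measurementNoise makes the output smoother but slower to follow real temperature changes.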
You may want to go for a simpler averaging algorithm instead. It is not as elegant as the Kalman filter, but it may be adequate for your case. These algorithms are plentiful on the web.
You can monkey around with the number of samples you take to balance the compromise between latency and stability. You may want to start with 10 samples and work from there.
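For instance, a moving average over the last N readings (again a plain-Java sketch; WINDOW = 10 matches the suggested starting point):

// Ring-buffer moving average: each call returns the mean of the last WINDOW samples.
public class MovingAverage {
    private static final int WINDOW = 10;
    private final double[] samples = new double[WINDOW];
    private int index = 0;  // next slot to overwrite
    private int count = 0;  // samples seen so far, capped at WINDOW
    private double sum = 0;

    public double add(double sample) {
        sum -= samples[index]; // drop the oldest sample from the running sum
        samples[index] = sample;
        sum += sample;
        index = (index + 1) % WINDOW;
        if (count < WINDOW) count++;
        return sum / count;
    }
}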
I have very minimal programming experience.
I would like to write a program that will generate, and save as a GIF image, every possible image that can be created using only black and white pixels at 640 by 360 px dimensions.
In other words, each pixel can be either black or white. 640 x 360 = 230,400 pixels. So I believe a total of 460,800 images can be generated (230,400 x 2 for black/white).
I would like a program to do this automatically.
Please help!
First, to answer your questions: yes, there will be writing in "some" pictures. In fact, every text ever written by a human that fits in 640x360 pixels will show up, as well as every other text (text not yet written, or text that will never be written). You will also see pictures of every human who is, was, or will be alive. See the Infinite Monkey Theorem for further information.
The code to create your wanted GIF is fairly easy. I used Java for this. Note that you need an extra class: AnimatedGifEncoder. The code is not memory-bound because the AnimatedGifEncoder writes each image to disk as soon as it is computed. But make sure that you have enough disk space available.
import java.awt.Color;
import java.awt.image.BufferedImage;

public class BigPicture {
    private final int width;
    private final int height;

    private final int WHITE = Color.WHITE.getRGB();
    private final int BLACK = Color.BLACK.getRGB();

    public BigPicture(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public void process(String outFile) {
        AnimatedGifEncoder gif = new AnimatedGifEncoder();
        gif.setSize(width, height);
        gif.setTransparent(null); // no transparency
        gif.setRepeat(-1);        // play only once
        gif.setDelay(0);          // 0 ms delay between images,
                                  // 'cause ain't nobody got time for that!
        gif.start(outFile);

        BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);

        // set the image to all white
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                bufferedImage.setRGB(x, y, WHITE);
            }
        }

        // add white image
        gif.addFrame(bufferedImage);

        // add all other combinations
        while (increase(bufferedImage)) {
            gif.addFrame(bufferedImage);
        }

        gif.finish();
    }

    /**
     * @param bufferedImage
     *            the image to increase
     * @return false if the last pixel was set to black => the image is completely black
     */
    private boolean increase(BufferedImage bufferedImage) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (bufferedImage.getRGB(x, y) == WHITE) {
                    bufferedImage.setRGB(x, y, BLACK);
                    return true;
                }
                bufferedImage.setRGB(x, y, WHITE);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        new BigPicture(640, 360).process("C:\\temp\\bigpicture.gif");
        System.out.println("finished.");
    }
}
Please be aware that this will take some time. So don't bother waiting and enjoy your life instead! ;)
EDIT: Since my solution is a bit unclear, I will explain the algorithm.
I have defined a method called increase. This method takes the BufferedImage and changes the bit pattern of the image so that the next bit pattern appears. The method is essentially a binary increment. It returns false when the image reaches the last bit pattern (all pixels set to black).
As long as it is possible to increase the bit pattern (i.e. increase() returns true), we save the image as a new frame and increase the image again.
How the increase() method works: the method runs over the image, first in the x-direction, then in the y-direction. I assume that white pixels are 0 and black pixels are 1. So we want to take the bit pattern of the image and add 1. We inspect the first pixel: if it is white (0), we can add 1 without an overflow, so we turn the pixel black (0 + 1 = 1 => black pixel). After that we return from the method, because we only wanted to add 1; the method returns true because an increase was possible. If we encounter a black pixel, we have an overflow (1 + 1 = 2, or 10 in binary), so we set the current pixel to white and carry the 1 to the next pixel. This continues until we find the first white pixel.
Example: first we create a print method that prints the image as a binary number. Note that the number is reversed: the most significant bit is on the right side.
public void print(BufferedImage bufferedImage) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (bufferedImage.getRGB(x, y) == WHITE) {
                System.out.print(0); // white pixel
            } else {
                System.out.print(1); // black pixel
            }
        }
    }
    System.out.println();
}
Now we modify our main while loop:
print(bufferedImage); // this one prints the empty image
while (increase(bufferedImage)) {
    print(bufferedImage);
}
And now a short example to test:
new BigPicture(1, 5).process("C:\\temp\\bigpicture.gif");
and finally the output:
00000 // 0 this is the first print before the loop -> "white image"
10000 // 1 the first white pixel is set to black
01000 // 2 the first overflow, so the second pixel is set to black "2"
11000 // 3
00100 // 4
10100 // 5
01100
11100
00010 // 8
10010
01010
11010
00110
10110
01110
11110
00001 // 16
10001
01001
11001
00101
10101
01101
11101
00011
10011
01011
11011
00111
10111
01111
11111 // 31 == 2^5 - 1
finished.
In other words, each pixel can be either black or white. 640 x 360 =
230,400 pixels. So I believe total of 460,800 images are possible to
be generated (230,400 x 2 for black/white).
There is a little flaw in your belief. You are right about the number of pixels: 230,400. Unfortunately, this means there are not 2 * 230,400, but 2 ^ 230,400 possible pictures, which is a number with more than 69,000 digits (longer than the allowed answer size, I am afraid). For comparison, the diameter of the observable universe in centimeters (a centimeter being roughly the width of a pinkie) is a number with merely 29 digits.
In order to understand why your computation of the number of pictures is wrong, consider this example: if your pictures contained only three pixels, you could have 8 different pictures (2 ^ 3), rather than 6 (2 * 3). Here are all of them: BBB, BBW, BWB, BWW, WBB, WBW, WWB, WWW. Adding another pixel doubles the number of possible pictures, because you can have it white for all the 3-pixel cases, or black for all the 3-pixel cases. Doubling 1 (which is the number of pictures you can have with 0 pixels) 230,400 times gives you 2 ^ 230,400.
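If you want to see that digit count for yourself, a few lines of Java suffice (BigInteger handles a number this size effortlessly, as long as you only count its digits rather than try to enumerate pictures):

import java.math.BigInteger;

// Compute how many decimal digits 2^230400 has.
public class PictureCount {
    public static void main(String[] args) {
        BigInteger count = BigInteger.valueOf(2).pow(230400);
        System.out.println(count.toString().length() + " digits"); // prints: 69358 digits
    }
}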
It's great that there is a bounty on the question, but it is rather distracting and counter-productive if it was just an April Fools' joke.
I'm going to go ahead and pinch some code from a related question, just for fun.

from itertools import product

# one 230,400-element tuple of 0s and 1s per possible picture
for matrix in product([0, 1], repeat=230400):
    pass  # render and save your .gif
As all the comments have already stated, good luck!
On a more serious note, if you didn't want to be absolutely sure that you had all permutations, you could generate a random 640x360 matrix and store it as an image.
Perform this action, say, 100k times and you'll have at least an interesting set of pictures to look at, but it's infeasible to get every possible permutation.
You could then delete all identical files to reduce the set to just the unique images.
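As a rough sketch of that idea (plain Java; the image count and file names are arbitrary assumptions):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;
import javax.imageio.ImageIO;

// Write n random black-and-white 640x360 images to disk as GIFs.
public class RandomPictures {
    public static void main(String[] args) throws IOException {
        final int width = 640, height = 360, n = 100;
        Random rng = new Random();
        for (int i = 0; i < n; i++) {
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    img.setRGB(x, y, rng.nextBoolean() ? 0xFFFFFF : 0x000000);
                }
            }
            ImageIO.write(img, "gif", new File("random_" + i + ".gif"));
        }
    }
}

With 2 ^ 230,400 possibilities, random duplicates are astronomically unlikely, so the deduplication pass should find nothing to delete.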
I'm trying to create a simple function that will decrease the audio volume in a buffer (like a fade-out) on each iteration through the buffer. Here's my simple function.
double iterationSum = 1.0;

double iteration(double sample)
{
    iterationSum *= 0.9;
    //and then multiply that sum with the current sample.
    sample *= iterationSum;
    return sample;
}
This works fine at a 44100 Hz sample rate, but the problem is that if the sample rate is changed to, for example, 88200 Hz, the volume should only be reduced by half as much per sample, because there are twice as many samples; otherwise the fade-out finishes in half the time. I tried using a factor like 44100 / 88200 = 0.5, but that does not halve the step in any way.
I'm stuck with this simple problem and need a guide to lead me through. What can I do to halve the per-sample step when the sample rate is changed at runtime?
Regards, Morgan
The most robust way to fade out independent of sample rate is to keep track of the time since the fadeout started, and use an explicit fadeout(time) function.
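For instance, a minimal sketch (in Java syntax; the linear ramp over fadeSeconds is just an assumption, any gain curve works the same way):

public class Fader {
    private final double sampleRate;   // samples per second
    private final double fadeSeconds;  // total fade-out duration
    private long samplesElapsed = 0;

    public Fader(double sampleRate, double fadeSeconds) {
        this.sampleRate = sampleRate;
        this.fadeSeconds = fadeSeconds;
    }

    // Scale one sample by the current gain. The ramp depends only on elapsed
    // time, so changing the sample rate does not change the fade duration.
    public double next(double sample) {
        double t = samplesElapsed++ / sampleRate;           // seconds since the fade started
        double gain = Math.max(0.0, 1.0 - t / fadeSeconds); // linear ramp from 1 down to 0
        return sample * gain;
    }
}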
If for some reason you can't do that, you can set your exponential decay rate based on the sample rate, as follows:
double decay_time = 0.01; // time to fall to ~37% of original amplitude
double sample_time = 1.0 / sampleRate;
double natural_decay_factor = exp(- sample_time / decay_time);
...
double iteration(double sample) {
iterationSum *= natural_decay_factor;
...
}
The reason for the ~37% is that exp(x) = e^x, where e is the natural log base, and 1/e ≈ 0.3679. If you want a different decay factor for your decay time, you need to scale the exponent by a constant:
// for decay to 50% amplitude (~ -6dB) over the given decay_time:
double halflife_decay_factor = exp(- log(2) * sample_time / decay_time);
// for decay to 10% amplitude (-20dB) over the given decay_time:
double db20_decay_factor = exp(- log(10) * sample_time / decay_time);
I'm not sure if I understood, but what about something like this:
public void fadeOut(double sampleRate)
{
    //run 1 iteration per sec?
    int defaultIterations = 10;
    double decrement = calculateIteration(sampleRate, defaultIterations);

    for (int i = 0; i < defaultIterations; i++)
    {
        //maybe run each one of these loops every x ms?
        sampleRate = processIteration(sampleRate, decrement);
    }
}

public double calculateIteration(double sampleRate, int numIterations)
{
    return sampleRate / numIterations;
}

private double processIteration(double sampleRate, double decrement)
{
    return sampleRate -= decrement;
}