I'm trying to apply Delay, Tremolo, and other effects to an audio sample in Minim.
Following the documentation, I can easily have these effects manipulate a generated signal, but I can't seem to wrap my head around how to get them to work on audio samples.
Does anyone have an example or program I could use as a reference? I would like to stay in Minim, as this is part of a larger project that uses Minim for a bunch of other things.
I'm trying to implement the Xilinx xfOpenCV stereovision pipeline explained at the bottom of the page here in Vivado HLS as a standalone IP core (instead of using the accelerated flow). The stereo pipeline functions are based on, and very similar to, the OpenCV ones.
First I collected a couple of images from an already calibrated stereo camera and simulated the xfOpenCV functions as a standalone HW IP to make sure I got the expected result. After simulation the result is not perfect and has quite a lot of noise, for instance:
I went ahead and synthesized and implemented the IP in hardware (FPGA) to test it with a live stereo-camera stream. I have the calibration parameters, and they are used to correct the live stream frames before the 'stereo' function. (The calibration parameters have also been tested previously in simulation.)
All works fine in terms of video flow, memory, etc., but I get quite a high level of noise (as expected from the simulation). This is a screenshot example of the live camera:
Any idea why this 'flickering' noise is generated? Is it caused by noise in the original images? What would be the best approach or next steps to get rid of it (or smooth it out)?
Thanks in advance.
I have kind of a proof-of-concept project ahead. I want to transmit a very short message to a smartphone via light.
What I have is:
an LED strip (NOT individually addressable)
an Arduino
a typical smartphone
8 bytes I want to transmit
The obstacles:
not every smartphone camera works well under all light conditions (the recorded color is sometimes not the same as the one the human eye perceives)
I don't have complete darkness to work in, but harsh daylight :D.
I want to encode a message in a sequence of light, for example by varying color or pulse duration.
Are there any suitable encoding schemes or libraries around that you can recommend and that I should take a look at?
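To make this concrete, here is a minimal sketch of the kind of thing I have in mind: Manchester-coded on/off keying, with the whole (non-addressable) strip switched through a transistor. The pin number, bit period, and payload bytes are placeholder assumptions, not a tested protocol:
// Hypothetical Arduino sketch: Manchester-coded on/off keying on an LED strip.
const int STRIP_PIN = 9;              // transistor gate driving the strip (assumed)
const unsigned long HALF_BIT_MS = 50; // half-bit period; must outlast the camera's frame time

const byte PAYLOAD[8] = {0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x02, 0x03, 0x04}; // dummy message

void sendBit(bool bit) {
  // Manchester (IEEE 802.3 convention): 0 = high->low, 1 = low->high.
  digitalWrite(STRIP_PIN, bit ? LOW : HIGH);
  delay(HALF_BIT_MS);
  digitalWrite(STRIP_PIN, bit ? HIGH : LOW);
  delay(HALF_BIT_MS);
}

void sendByte(byte b) {
  for (int i = 7; i >= 0; i--) {
    sendBit((b >> i) & 1); // most significant bit first
  }
}

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
}

void loop() {
  // Preamble of alternating bits so the receiver can lock onto the bit clock.
  for (int i = 0; i < 8; i++) sendBit(i & 1);
  for (int i = 0; i < 8; i++) sendByte(PAYLOAD[i]);
  digitalWrite(STRIP_PIN, LOW);
  delay(2000); // gap between repetitions
}
My hope is that Manchester coding would help here: it keeps the average brightness roughly constant and carries its own clock, which should play nicer with the phone camera's auto-exposure than plain on/off keying.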
I'm new to audio programming so excuse me if I'm not using the right terms...
I have two streaming buffers that I want to play simultaneously, completely synchronized, and I want to control the blending ratio between the streams. I'm sure it's as simple as having two sources playing and just changing their gain, but I read about people doing tricks like using a single two-channel buffer instead of two mono buffers: they play from a single source but control the blending between the channels. The article I read wasn't about OpenAL, so my question is: is this even possible with OpenAL?
I guess I don't have to do it this way, but now I'm curious and want to learn how to set it up. Am I supposed to set up an alFilter? Creative's documentation says "Buffers containing more than one channel of data will be played without 3D spatialization." Reading this, I guess I need a pre-pass at the buffer level and then have the source output a blended mono signal.
I guess I'll ask another question. Is OpenAL flexible enough to do tricks like this?
I decode my stream manually, so I realize how easy it would be to do the blending myself before feeding the buffer, but then I wouldn't be able to change the blending factor in real time, since I already have a second or so of the stream buffered.
I have two streaming buffers that I want to play simultaneously, completely synchronized.
I want to control the blending ratio between the streams. I'm sure it's as simple as having two sources playing and just changing their gain
Yes, it should be. Did you try that? What was the problem?
#include <algorithm> // for std::min / std::max

ALuint source1;
ALuint source2;
...
// ratio = 1.0 -> only source1 audible, ratio = 0.0 -> only source2.
void set_ratio(float ratio) {
    ratio = std::max(0.0f, std::min(ratio, 1.0f)); // clamp to [0, 1]
    alSourcef(source1, AL_GAIN, ratio);
    alSourcef(source2, AL_GAIN, 1.0f - ratio);
}
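One caveat, since you mentioned the streams must stay completely synchronized: two separate alSourcePlay calls can land on different mixer updates. If that matters, start both sources with the batched variant, which operates on all listed sources at once:
ALuint sources[2] = { source1, source2 };
alSourcePlayv(2, sources); // start both sources together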
I am porting a game from iPad to Mac.
Every time I start the game, a certain set of sounds has an irritating noise at the end of playback, much like a short burst of heavy static.
Also, the sounds that produce the noise are not the same on every execution; each time, a different set of sounds has the noise at the end.
Are there any OpenAL settings for this situation that would fix it?
Solutions tried:
Converted the MP3 files to higher and lower bitrates and tried playback; the noise still persists.
It sounds like (get it?) you're passing in a buffer that is larger than your data, and the noise at the end is the result of attempting to interpret those bytes as sound.
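A minimal sketch of the usual fix, assuming you decode into a fixed-capacity staging buffer; decode_mp3 here is a hypothetical stand-in for whatever decoder you actually use:
#include <vector>
#include <AL/al.h>

std::vector<char> staging(64 * 1024); // fixed-capacity decode buffer

// decode_mp3 is a placeholder for your decoder; it returns the number of
// bytes it actually wrote. The last chunk of a file is usually shorter
// than the buffer's capacity.
size_t bytesDecoded = decode_mp3(staging.data(), staging.size());

ALuint buf;
alGenBuffers(1, &buf);

// Passing staging.size() here would upload leftover garbage bytes,
// which play back as a burst of static at the end of the sound.
alBufferData(buf, AL_FORMAT_STEREO16, staging.data(),
             static_cast<ALsizei>(bytesDecoded), 44100);
That would also explain why a different set of sounds is affected on each run: whatever happens to be left over in the buffer from the previous decode determines what the tail sounds like.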
I'm about to start a project that will record and edit audio files, and I'm looking for a good library (preferably Ruby, but I'll consider anything other than Java or .NET) for on-the-fly visualization of waveforms.
Does anybody know where I should start my search?
That's a lot of data to be streaming into a browser. Flash or Flex charts are probably the only solutions that will be memory-efficient; JavaScript charting tends to break down for large data sets.
When displaying an audio waveform, you will want to do some sort of data reduction on the original data, because there is usually more data available in an audio file than pixels on the screen. Most audio editors build a separate file (called a peak file or overview file) which stores a subset of the audio data (usually the peaks and valleys of a waveform) for use at different zoom levels. Then as you zoom in past a certain point you start referencing the raw audio data itself.
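The reduction step itself is simple; here is a sketch in C++ for concreteness (the function name, bucket size, and 16-bit mono samples are illustrative assumptions): split the samples into one bucket per output pixel and keep each bucket's minimum and maximum.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Peak { int16_t min; int16_t max; };

// Reduce raw samples to one min/max pair per bucket, e.g. one pair per
// horizontal screen pixel at the current zoom level.
std::vector<Peak> buildPeaks(const std::vector<int16_t>& samples,
                             size_t samplesPerBucket) {
    std::vector<Peak> peaks;
    for (size_t i = 0; i < samples.size(); i += samplesPerBucket) {
        size_t end = std::min(i + samplesPerBucket, samples.size());
        Peak p{samples[i], samples[i]};
        for (size_t j = i + 1; j < end; ++j) {
            p.min = std::min(p.min, samples[j]);
            p.max = std::max(p.max, samples[j]);
        }
        peaks.push_back(p);
    }
    return peaks;
}
Drawing a vertical line from each bucket's min to its max, one bucket per horizontal pixel, gives the familiar waveform overview; the peak file is essentially this array precomputed at several zoom levels.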
Here are some good articles on this:
Waveform Display
Build an Audio Waveform Display
As far as source code goes, I would recommend looking through the Audacity source code. Audacity's waveform display is pretty good and most likely does a similar sort of data reduction when rendering the waveforms.
I wrote one:
http://github.com/pangdudu/rude/tree/master/lib/waveform_narray_testing.rb
Nick
The other option is generating the waveforms on the server side with GD or RMagick. But good luck getting RubyGD to compile.
Processing is often used for visualization, and it has a Ruby port:
https://github.com/jashkenas/ruby-processing/wiki