MAX MSP // How can I control visualisation with an incoming audio signal? - max

Does anyone work with Max/MSP and can help me?
How can I control a video signal with an incoming audio signal? I would like to switch the video input between 2 cameras when the music changes. How can I read a frequency or BPM and, when it changes, send a signal to switch the camera or visualisation? Does anyone have an idea? I would be glad to read a couple of ideas. Thanks

If you're using Max 5 or later, it has really great built-in help files. For instance, when you open Max, go to Help -> Jitter Tutorials. I'd start by reading the first few just to get a handle on how Jitter deals with data and matrices. Then, specifically for video switching, read Tutorial 8: Simple Mixing and Tutorial 9: More Mixing.
Then read Tutorial 21: Working with Live Video and Audio Input, and Tutorial 22: Working with Video Output; those will cover your inputs and outputs.
For audio analysis, you can either grab some prebuilt stuff from the Max Toolbox website, or work through MSP's tutorials, which parallel Jitter's. I'd suggest reading the first few introductory ones, then jumping to MSP Tutorial 6: A Review of Fundamentals, Tutorial 25: Using the FFT, and Tutorial 23: Viewing Signal Data.
Those should get you started and on the right path.
Also check out the CNMAT set of externals, as there is a lot of really useful prebuilt stuff in there.
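Max patches are graphical, but the control logic those tutorials build up to can be sketched in plain code. Here is a minimal, hypothetical illustration in Python of the simplest version of the idea: track a smoothed amplitude envelope and flip between two camera indices when it crosses a threshold. The constants are made up, and in an actual patch you'd get the level from an MSP object and feed the result to a video switcher; this just shows the decision logic.

```python
def make_camera_switcher(threshold=0.3, smoothing=0.9):
    """Return a function mapping amplitude readings to a camera index (0 or 1).

    Keeps a smoothed envelope of the signal level and switches to camera 1
    when the envelope rises above `threshold`, back to camera 0 when it
    falls below half the threshold (a little hysteresis avoids flicker).
    All constants are illustrative, not taken from any real patch.
    """
    state = {"envelope": 0.0, "camera": 0}

    def on_amplitude(amp):
        # One-pole smoothing of the absolute signal level.
        state["envelope"] = smoothing * state["envelope"] + (1 - smoothing) * abs(amp)
        if state["camera"] == 0 and state["envelope"] > threshold:
            state["camera"] = 1
        elif state["camera"] == 1 and state["envelope"] < threshold / 2:
            state["camera"] = 0
        return state["camera"]

    return on_amplitude
```

For BPM- or frequency-driven switching the input would be an FFT bin magnitude or beat-tracker output instead of raw amplitude, but the threshold-with-hysteresis structure stays the same.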

Related

Heart Rate 3 Click pulse oximeter (SFH7050) library for Arduino Uno

I'm trying to connect an SFH7050 pulse oximeter to an Arduino Uno. The SFH7050 is connected via a Heart Rate 3 Click board...
https://www.mikroe.com/heart-rate-3-click
...and I'm struggling to find a suitable library for it. I have been searching for two weeks and haven't found a dedicated library yet. I tried the MAX30100 library (I heard it should work*) and it errors out on initialisation (in "begin"). There's probably something I'm missing here (probably some knowledge, duh), but I'm stuck for good.
I'm quite new to Arduino in general; does anybody know how to get this particular pulse oximeter going?
*I would gladly ask this person again about this, but it's sadly not possible

Looking for an ESP32 example using I2SClass for basic sound input

I'm looking for a simple example using an ESP32 module with an INMP441 microphone and the I2SClass. I can find plenty of examples using the C I2S library, but none with the current I2SClass library. I'm specifically interested in the basic startup and sampling. I want to be able to sample and return only samples that exceed a certain threshold. Thanks in advance for any help/pointers you can provide.
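While an I2SClass-specific example would answer the startup part, the thresholding part is independent of the I2S API. A sketch of that logic (in Python for brevity; the function name is mine, and on the ESP32 you'd run the same loop over the raw signed samples read from the I2S peripheral):

```python
def samples_above_threshold(samples, threshold):
    """Return (index, sample) pairs whose magnitude exceeds `threshold`.

    `samples` is a buffer of signed integers as an I2S microphone driver
    would deliver; keeping the index preserves when within the buffer
    each loud sample occurred.
    """
    return [(i, s) for i, s in enumerate(samples) if abs(s) > threshold]
```

The same filter ports directly to an Arduino `for` loop over the buffer returned by each I2S read.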

How to use ESP32 to generate HDMI

hi,
My friend and I are making a console and we would like it to output HDMI (preferably 1080p) using an ESP32, but after looking for ages we haven't found anything.
You can get a VGA card for it, but that is probably the limit of what can be achieved in terms of processing power.
However, it should be sufficient for retro-style consoles. For anything more, you'd need more powerful hardware.
The ESP32 does not produce video output of any kind. Have a look at the data sheet
Actually, you can produce video with the ESP32. Look into what Bitluni has done. There are also others who have produced code for emulators running entirely on the ESP32 with composite video and VGA outputs.
Search YouTube for Bitluni, or for "esp32 composite video".
I have built several ESP32 clocks out of CRT TVs (color and black/white).

How do I output the current dB level on Mac?

Hello and thanks for checking out my question,
I am working on a project analysing film and visualizing the data I got from it. I'm quite new at programming and only have some basic experience in java and javascript.
For my project I want to store the dB levels of a movie in a CSV file, to later work with the data in Processing. I couldn't find anything for Mac (OS X) that wasn't too complex for me to comprehend.
Help would be much appreciated!
Thank you.
You're going to have to break your problem down into smaller steps.
Step 1: Generating the CSV file.
There are probably a million different ways to do this, and that can be pretty confusing. But break this down into smaller sub-steps and then take those steps one at a time. Can you get a movie playing in Processing? There is a Video library that does just that. Then can you get the volume level every X seconds? You might start with a separate sketch that just prints something to the console every X seconds. For getting the volume, you might try out the Minim library. If that doesn't work, Google is your friend, and remember to keep breaking your problem down into smaller steps!
Step 2: Loading the CSV file.
Now that you have the CSV file, you have to load it into Processing. There are several functions in the reference that might come in handy. Again, start with an example program that just prints the values to the console. Get that working perfectly before moving on.
Step 3: Visualizing the data.
Now that you have the data in your Processing code, you can start thinking about how you want to visualize the data. Maybe a line chart that just shows the volume over time just to start with.
If you get stuck on a specific step, then try to break it down into smaller sub-steps. Create an example program that just tests one of those smaller sub-steps (also known as an MCVE), and you'll be able to ask a more specific code-oriented question. Good luck, sounds like an interesting project!
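For the volume measurement in step 1, the usual approach is to compute the RMS of a buffer of samples and convert it to decibels, then write one row per time step. A sketch of that math plus the CSV writing, in Python rather than Processing, with function and column names of my own choosing:

```python
import csv
import math

def rms_to_db(samples):
    """Convert a buffer of samples in [-1, 1] to a dBFS value.

    RMS of 1.0 (full scale) maps to 0 dB; quieter buffers are negative.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(rms)

def write_levels_csv(path, rows):
    """Write (seconds, db) rows so they can be loaded back as a table."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["seconds", "db"])
        writer.writerows(rows)
```

In Processing the equivalent would be sampling the level every X seconds (e.g. from Minim) and appending rows with its Table/saveTable functions; the dB formula is the same.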

NSSound-like framework that works, but doesn't require dealing with a steep learning curve

I've pretty much finished work on a white noise feature for one of my applications, using NSSound to play a loop of a 10-second AAC-encoded pre-recorded white noise file.
[sound setLoops: YES]
should be all that's required, right?
It works like a charm but I've noticed that there is an audible pause between the sound file finishing and restarting.. a sort of "plop" sound. This isn't present when looping the original sound files and after an hour or so of trying to figure this out, I've come to the conclusion that NSSound sucks and that the audible pause is an artefact of the synchronisation of the private background thread playing the sound. It seems to be dependent on the main run loop somehow and this causes the audible gap between the end and restarting of the sound.
I know very little about sound stuff and this is a very minor feature, so I don't want to get into the depths of CoreAudio just to play a looping 10s sound fragment.. so I went chasing after a nice alternative, but nothing seems to quite fit:
Core Audio: total overkill, but at least a standard framework
AudioQueue: complicated, with C++ sample code!?
MusicKit/ SndKit: also huge learning curve, based on lots of open source stuff, etc.
I saw that AVFoundation on iOS 4 would be a nice way to play sounds, but that's only scheduled for Mac OS X 10.7..
Is there any easy-to-use way of reliably looping sound on Mac OS X 10.5+?
Is there any sample code for AudioQueue or Core Audio that takes the pain out of using them from an Objective-C application?
Any help would be very much appreciated..
Best regards,
Frank
Use QTKit. Create a QTMovie for the sound, set it to loop, and leave it playing.
Just for the sake of the archives.
QTKit also suffers from a gap between the end of one play through and start of the next one. It seems to be linked with re-initializing the data (perhaps re-reading it from disk?) in some way. It's a lot more noticeable when using the much smaller but highly compressed m4a format than when playing uncompressed aiff files but it's still there even so.
The solution I've found is to use Audio Queue Services:
http://developer.apple.com/mac/library/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQPlayback/PlayingAudio.html#//apple_ref/doc/uid/TP40005343-CH3-SW1
and
http://developer.apple.com/mac/library/samplecode/AudioQueueTools/Listings/aqplay_cpp.html#//apple_ref/doc/uid/DTS10004380-aqplay_cpp-DontLinkElementID_4
The Audio Queue calls a callback function which prepares and enqueues the next buffer, so when you reach the end of the current file you need to start again from the beginning. This gives completely gapless playback.
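The wrap-around in that callback is the whole trick: when a refill reaches the end of the data, keep filling the same buffer from the beginning instead of enqueueing a short final buffer. A toy model of just that refill logic (no actual Audio Queue calls; names and buffer representation are my own):

```python
def fill_buffer(audio_data, position, buffer_size):
    """Fill one playback buffer from `audio_data`, wrapping at end-of-data.

    Returns (buffer, new_position). Because every buffer handed to the
    queue is completely full, there is never a short buffer at the loop
    point, which is what makes the playback gapless.
    """
    buf = []
    pos = position
    while len(buf) < buffer_size:
        take = min(len(audio_data) - pos, buffer_size - len(buf))
        buf.extend(audio_data[pos:pos + take])
        pos = (pos + take) % len(audio_data)
    return buf, pos
```

In the real callback, `audio_data` would be the decoded packets of the sound file and the filled buffer is handed to AudioQueueEnqueueBuffer.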
There are two gotchas in the sample code in the documentation.
The first is an actual bug (I'll contact DTS about this so they can correct it). Before allocating and priming the audio buffers, the custom structure must flag playback as running, otherwise the audio buffers never get primed and nothing is played:
aqData.mIsRunning = 1;
The second gotcha is that the code doesn't run in Cocoa but as a standalone tool, so the code connects the audio queue to a new run loop and actually implements the run loop itself as the last step of the program.
Instead of passing CFRunLoopGetCurrent(), just pass NULL which causes the AudioQueue to run in its own run loop.
result = AudioQueueNewOutput (              // 1
    &aqData.mDataFormat,                    // 2
    HandleOutputBuffer,                     // 3
    &aqData,                                // 4
    NULL, // CFRunLoopGetCurrent (),        // 5
    kCFRunLoopCommonModes,                  // 6
    0,                                      // 7
    &aqData.mQueue                          // 8
);
I hope this can save the poor wretches trying to do this same thing in the future a bit of time :-)
Sadly, there is a lot of pain when developing audio applications on OS X. The learning curve is very steep because the documentation is fairly sparse.
If you don't mind Objective-C++, I've written a framework for this kind of thing: SFBAudioEngine. If you wanted to play a sound with my code, here is how you could do it:
DSPAudioPlayer *player = new DSPAudioPlayer();
player->Enqueue((CFURLRef)audioURL);
player->Play();
Looping is also possible.
