As I was writing a prototype for a voice-driven user interface, I ran into a wall. I was convinced that I could analyze a sound’s frequency spectrum with ActionScript 3. It turns out that the awesome SoundMixer.computeSpectrum(), which implements the Fast Fourier Transform, can only sample sounds that are currently playing. It cannot be supplied a ByteArray, such as microphone data.
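For context, here is a minimal sketch of how computeSpectrum() is normally used against the global output mix. The layout of the output (256 left-channel floats followed by 256 right-channel floats) is per Adobe's documentation; the event wiring around it is just one plausible setup.

```actionscript
import flash.display.Sprite;
import flash.events.Event;
import flash.media.SoundMixer;
import flash.utils.ByteArray;

public class SpectrumSketch extends Sprite {
    private var bytes:ByteArray = new ByteArray();

    public function SpectrumSketch() {
        addEventListener(Event.ENTER_FRAME, onFrame);
    }

    private function onFrame(e:Event):void {
        // FFTMode = true → frequency spectrum instead of raw waveform.
        // Only works on sounds currently playing through the mixer;
        // there is no overload that accepts your own ByteArray of samples.
        SoundMixer.computeSpectrum(bytes, true);
        for (var i:int = 0; i < 256; i++) {
            var leftMagnitude:Number = bytes.readFloat(); // 0.0 .. 1.0
            // ... draw a bar for this bin ...
        }
    }
}
```

This is exactly the limitation above: the method reads from the mixer, so microphone input captured via SampleDataEvent never reaches it.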
I do not see any reason why feeding a ByteArray to SoundMixer.computeSpectrum() should not be allowed. I would appreciate it if Adobe released the SoundMixer’s source code to the public so that I could implement the missing feature myself, instead of reverse engineering the whole class.
Edit: Robin Millette found a library packaged as a SWC that I could easily inject into my prototype. Here is the app and source.
Try speaking, whistling, blowing into the microphone, or clapping your hands to see different results. Each bar represents a half-octave.
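The half-octave grouping above can be sketched as follows. This is not the demo’s actual code, just an illustration under assumed parameters: a 44.1 kHz sample rate, 256 linearly spaced bins up to the Nyquist frequency, and a hypothetical lowest band edge F0 equal to one bin width.

```actionscript
// Group 256 linear FFT magnitudes into half-octave bands.
// A half octave is a frequency ratio of 2^(1/2) = √2, so band k
// spans [F0 * 2^(k/2), F0 * 2^((k+1)/2)).
const SAMPLE_RATE:Number = 44100;
const BINS:int = 256;
const BIN_WIDTH:Number = (SAMPLE_RATE / 2) / BINS; // ≈ 86 Hz per bin
const F0:Number = BIN_WIDTH;                       // assumed lowest edge

function halfOctaveBands(magnitudes:Vector.<Number>):Vector.<Number> {
    var bands:Vector.<Number> = new Vector.<Number>();
    var lower:Number = F0;
    while (lower < SAMPLE_RATE / 2) {
        var upper:Number = lower * Math.SQRT2; // next half-octave edge
        var sum:Number = 0;
        for (var i:int = 0; i < BINS; i++) {
            var f:Number = i * BIN_WIDTH;      // center frequency of bin i
            if (f >= lower && f < upper) sum += magnitudes[i];
        }
        bands.push(sum); // one bar per half-octave band
        lower = upper;
    }
    return bands;
}
```

Because each band is √2 times wider than the previous one, higher bands absorb progressively more linear bins, which is why speech, whistling, and claps light up visibly different bars.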