Scanned synthesis

Scanned synthesis is a technique for animating wave tables and controlling them in real time. Developed by Bill Verplank, Rob Shaw, and Max Mathews between 1998 and 1999 at Interval Research, Inc., it is based on the psychoacoustics of how we hear and appreciate timbres, and on our motor-control (haptic) ability to manipulate timbres during live performance.

Scanned synthesis involves a slow dynamic system whose frequencies of vibration lie below about 15 Hz. Because the ear cannot hear these low frequencies directly, the "shape" of the dynamic system along a closed path is scanned periodically to produce audible frequencies. The "shape" is converted to a sound wave whose pitch is determined by the speed of the scanning function. Pitch control is thus completely separate from control of the dynamic system, making timbre and pitch independent. The system can be viewed as a dynamic wave table, and the model can be compared to a slowly vibrating string, or to a two-dimensional surface obeying the wave equation.
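The idea can be sketched in a few lines of Python (a minimal sketch, assuming NumPy; the mass-spring string model, parameter values, and scan rate below are illustrative assumptions, not taken from any particular implementation):

```python
import numpy as np

# Illustrative sketch of scanned synthesis: a slow mass-spring
# "string" on a closed path is scanned at audio rate.
# All parameter values are assumptions chosen for the demo.

N = 128                  # masses along the closed path
sr = 44100               # audio sample rate (Hz)
pitch = 220.0            # scan rate -> perceived pitch (Hz)
dur = 0.5                # seconds of audio to render

# Initial "pluck": displace the string into one sine period.
shape = np.sin(np.linspace(0.0, 2.0 * np.pi, N, endpoint=False))
vel = np.zeros(N)

stiffness = 2e-4         # weak springs => vibration well below audio rate
damping = 1e-4           # slight damping so the timbre slowly evolves

out = np.zeros(int(sr * dur))
phase = 0.0
for i in range(len(out)):
    # Scan: wavetable lookup along the closed path with linear
    # interpolation; the scan speed alone sets the pitch.
    pos = phase * N
    j = int(pos)
    frac = pos - j
    out[i] = (1.0 - frac) * shape[j] + frac * shape[(j + 1) % N]
    phase = (phase + pitch / sr) % 1.0

    # Slow dynamics: discrete wave equation on a ring (one explicit
    # step per audio sample), so the wavetable changes over time.
    lap = np.roll(shape, -1) - 2.0 * shape + np.roll(shape, 1)
    vel += stiffness * lap - damping * vel
    shape = shape + vel
```

Changing `pitch` alters only the scan speed, while changing `stiffness` or `damping` alters only how the timbre evolves, illustrating the independence of pitch and timbre described above.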

The following implementations of scanned synthesis are freely available:

 * Csound features the scanu and scans opcodes developed by Paris Smaragdis. This was the first publicly available implementation of scanned synthesis.
 * Pure Data features examples of scanned synthesis via the pmpd library.
 * Common Lisp Music includes an implementation in circular-scanned.clm.
 * Scanned Synth VST from Humanoid Sound Systems was the first VST implementation of scanned synthesis, first released in March 2006 and still being actively developed. It is available from the Humanoid Sound Systems web site.
 * ScanSynthGL is another VST implementation of scanned synthesis, by mdsp of Smartelectronix, announced in March 2006 on the KVRAudio forum. An unreleased beta version, some audio samples, and a screenshot exist, but no public version has been released yet.