
Psychoacoustics
Psychoacoustics is a branch of psychophysics (the study of psychological responses to physical stimuli). It is the scientific study of the relationship between physical sound and its perception, and it overlaps with music psychology. Psychoacoustics can deal with everything from the physiology of the ear to the analysis of a musical performance.

The Human Ear
Understanding the human ear plays a critical role in understanding psychoacoustics. The human ear can only hear frequencies between about 20 Hz and 20 kHz. Frequencies below 20 Hz are known as infrasonic, and frequencies above 20 kHz are known as ultrasonic. The ear is divided into three main subdivisions: the outer ear, the middle ear, and the inner ear. Sound is captured by the pinna, the outer portion of the ear, which funnels sound vibrations toward the eardrum. In the middle ear, the tympanic membrane (eardrum) vibrates in response to varying sound pressure, and three small bones known as the ossicles (the malleus, incus, and stapes) transmit these vibrations to the oval window of the cochlea, converting acoustical vibrations in air into fluid vibrations in the inner ear. The inner ear then processes these vibrations, filtering and transducing them mechanically, hydrodynamically, and electrochemically, preparing the signal for the brain. The cochlea has two sub-chambers, the scala vestibuli and the scala tympani, filled with a fluid known as perilymph that helps carry the signal. The auditory nerve then carries the information to the cochlear nucleus, the first stage of the central auditory pathway to the brain. These three parts of the ear are collectively classified as the peripheral auditory system.
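The nominal hearing range above can be expressed as a small sketch. The 20 Hz and 20 kHz limits are the nominal figures from the text; the limits for a real listener vary with age and individual physiology:

```python
def classify_frequency(hz):
    """Classify a frequency relative to the nominal human hearing
    range of 20 Hz to 20 kHz (individual limits vary)."""
    if hz < 20:
        return "infrasonic"
    if hz > 20_000:
        return "ultrasonic"
    return "audible"

print(classify_frequency(10))      # infrasonic
print(classify_frequency(440))     # audible
print(classify_frequency(25_000))  # ultrasonic
```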

Sound Localization
Sound localization is the process by which we determine where a sound originates. The brain depends on subtle differences in intensity, spectral content, and timing to localize the source. Localization relies on cues in three dimensions: the azimuth (horizontal angle), the zenith (vertical angle), and the distance (for static sounds) or velocity (for moving sounds). The azimuth of a sound is judged from the difference in arrival times between the ears, the relative amplitude of high-frequency sounds (the head-shadow effect), and the asymmetrical spectral reflections from the pinna. Distance cues include the loss of detail, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal. The velocity cue arises from a moving sound source; a common example is the Doppler shift, the change in perceived pitch caused by the motion of the source relative to the listener. Depending on where the source is located, the head acts as a barrier that alters the timbre, intensity, and spectral qualities of the sound, helping the brain orient toward where the sound emanated from. These minute differences between the two ears are known as interaural cues. Lower frequencies, with their longer wavelengths, diffract around the head, forcing the brain to rely mainly on phase cues from the source. Helmut Haas discovered that we can discern the location of a sound source despite additional reflections, based on the earliest arriving sound. This principle is known as the Haas effect, or the precedence effect. Haas found that even a 1 millisecond delay between the original sound and a reflection increased the perceived spaciousness while still allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple sounds at once.
The nervous system will, however, only fuse reflections that arrive within about 35 milliseconds of one another and that are similar in intensity. Another factor in sound localization is sound radiation. Every sound-generating object has a unique radiation pattern, a measurement that describes the amplitude of the sound it projects in each direction. Radiation patterns provide another clue that helps us locate the original sound.
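The arrival-time cue for azimuth described above can be sketched numerically. A common approximation (not from the original text) is Woodworth's spherical-head model; the head radius and speed of sound used below are illustrative assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # m, an assumed average head radius

def interaural_time_difference(azimuth_deg):
    """Approximate interaural time difference (ITD) for a distant
    source using Woodworth's spherical-head model:
    ITD = (r / c) * (theta + sin(theta)),
    where theta is the azimuth in radians (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side gives the largest delay,
# around two-thirds of a millisecond for this head size.
print(interaural_time_difference(90))  # ≈ 0.00066 s
```

Delays of this size are the timing differences the brain compares between the two ears; for a source straight ahead the ITD is zero.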
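The Doppler shift mentioned above follows the standard formula for sound in air. A minimal sketch, assuming motion along the straight line between source and observer:

```python
def doppler_shift(f_source, v_source, v_observer=0.0, c=343.0):
    """Observed frequency for motion along the source-observer line:
    f_obs = f_src * (c + v_observer) / (c - v_source).
    Positive v_source means the source approaches the observer;
    positive v_observer means the observer approaches the source."""
    return f_source * (c + v_observer) / (c - v_source)

# A 440 Hz source approaching at 30 m/s sounds higher in pitch,
# and receding at 30 m/s sounds lower.
print(doppler_shift(440, 30))   # above 440 Hz
print(doppler_shift(440, -30))  # below 440 Hz
```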
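The 35 millisecond fusion window can be illustrated with a toy grouping function. This is a simplified sketch of the idea only: it groups arrivals purely by timing and ignores the intensity condition.

```python
def fuse_reflections(arrival_times_ms, window_ms=35.0):
    """Group arrival times (in ms) into perceptual events: a
    reflection is fused with the preceding event if it arrives
    within window_ms of that event's latest arrival."""
    events = []
    for t in sorted(arrival_times_ms):
        if events and t - events[-1][-1] <= window_ms:
            events[-1].append(t)  # fused with the ongoing event
        else:
            events.append([t])    # heard as a new, separate sound
    return events

# A direct sound at 0 ms with reflections at 10 and 30 ms fuses
# into one event; an echo at 80 ms is heard separately.
print(fuse_reflections([0, 10, 30, 80]))  # [[0, 10, 30], [80]]
```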