Monday, January 5, 2009

Music Shapes Our Worldview

Music, the art of sound, is perhaps the most truly self-expressive form of art. While visual art was conceived as an attempt to recreate the beauty we find all around us, music finds at its roots an attempt to externalize the self, to give utterance to our internal being. The beat projects the pulse of the heart out into the world. The voice gives utterance to the soul. But to say that music's beauty comes solely from the self would be far from the truth. Since music's beginning, there has been a give and take between the external (the world) and the internal (the mind and soul). As one projects one's conception of beauty out into the world, so does one derive one's conception of beauty from the world of which one is a part. But as our sonic landscape is shaped more and more by humans, this give-and-take relationship becomes increasingly weighted towards the 'give' side of the exchange. The Industrial Revolution brought with it a cacophony of sounds that were complex, varied, unpredictable, uncontrollable, and loud. And these sounds were everywhere. The sound of machinery and its products became virtually inescapable. Indeed, in industrialized places the majority of sound events became human-related, and these sounds had a great impact on the musical world. The human ear, so overstimulated with noise, became disenchanted with traditional musical form, and composers had to compete with the noisy world to entertain the modern ear, with its new rubric for complexity. Early on, this was done by an increased use of dissonant chords and melodies, more focus on percussive instruments, and the use of microtones and non-diatonic scales. But finally, it was the invention of the microphone that allowed sound artists to directly capture complex and beautiful sounds, instead of referencing or emulating them.

In this essay, I intend to demonstrate how music plays a large role in shaping our relationship with our surroundings by influencing our ideas about the physical world we live in and changing our philosophical perspective on its nature.


Until recently, sound was regarded as musical if, and only if, it had an instrumental source. A violin, a voice, or an African drum created 'music'. A rock thrown through a window did not. The creation of recording technology, especially the microphone, was the most important advance in the history of music. All of a sudden, previously ephemeral auditory experiences could be recreated and relived. Music could be copied almost endlessly and heard by a vastly greater audience, since listeners no longer had to be present at a live performance to hear it. No longer was there the monstrous creative barrier that restricted performers and composers to pieces that had to be performed live, without the aid of edits, overdubs, multiple takes, and other artifacts of post-production. Perhaps most importantly, recording introduced a completely new way of listening to, and creating, music. It allowed the capturing and manipulation of any perceivable sound event, and it allowed an audience to listen to these previously 'non-musical' sounds in an environment detached from the audio's physical sources. By giving these sounds a life of their own, independent of their source, listeners were encouraged to concentrate on the complex beauty of previously overlooked sounds.


Humans have evolved to favor the sense of vision to such an extent that sound events receive little conscious attention, unless a sound is abnormal or out of context enough to be considered alarming. Listening to recordings of sounds makes us more familiar with the things producing them by allowing us to explore their sonic qualities in depth, without being distracted, if not consumed, by their visual characteristics.

However, when we are listening to sound through the translation of a recording medium, there are several important and interesting observations we must explore before we can say what it is, exactly, that we are becoming familiar with.

1. The microphone hears differently from the human ear. Different frequencies, or combinations of frequencies, are emphasized, while others are deemphasized or lost altogether. Perhaps the most important observation about the nature of the microphone is that, regardless of its fidelity to the recorded object, it is still only representing, not reproducing, the acoustic phenomenon. It is an imperfect intermediary between source and listener, and as such, it constitutes a degree of separation between sender and receiver.
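
To make this concrete, here is a minimal sketch of the idea that a microphone colors rather than copies: a crude one-pole low-pass filter standing in for an assumed, highly simplified microphone frequency response. The cutoff value and the filter itself are illustrative, not a model of any real microphone.

    import numpy as np

    def mic_coloration(signal, rate, cutoff_hz=8000.0):
        """Crude one-pole low-pass: a stand-in for a microphone that
        deemphasizes high frequencies. Purely illustrative."""
        dt = 1.0 / rate
        rc = 1.0 / (2 * np.pi * cutoff_hz)
        alpha = dt / (rc + dt)
        out = np.zeros_like(signal)
        for i in range(1, len(signal)):
            out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
        return out

    # A 'source': equal parts low and high frequency.
    rate = 44100
    t = np.linspace(0, 1.0, rate, endpoint=False)
    source = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 12000 * t)
    captured = mic_coloration(source, rate)
    # The 12 kHz component survives, but attenuated: represented, not reproduced.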

2. Recorded sound is not live. It must be stored in some form before it can reach our ears. We do not hear it from the original source; recorded sounds are dead sounds from the past. The source has become the recording medium, whether it be tape, ones and zeros, or a wax cylinder. Even in the case of listening to a solo electric guitar at a live performance, there are numerous degrees of removal between the instrument and the ear, each one distorting the vibrations of the source: in the most bare-bones setup, the string's vibrations are transduced by a magnetic pickup into an electric signal, which flows through a wire into a preamp, which drastically raises the amplitude of the signal, then into an amp, which further distorts amplitude and frequency; then the signal is converted back into physical movement by a magnetic coil, which in turn moves a speaker cone, which causes the air to move in the surrounding space, after which numerous things happen to the sound, depending on the dimensions of the room, the materials within it, the distance between listener and speaker, and so on.
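
That cascade of stages can be caricatured in code. Below is a minimal sketch, with entirely made-up gain and clipping values, that treats each stage as a function applied to the signal in sequence; the point is only that the listener hears the composition of all these transformations, never the string itself.

    import numpy as np

    def pickup(x):
        return 0.1 * x                   # transduction: string motion -> small voltage

    def preamp(x):
        return 20.0 * x                  # large, roughly linear gain

    def amp(x):
        return np.tanh(5.0 * x)          # gain plus soft clipping (nonlinear distortion)

    def room(x, delay=2000, reflect=0.3):
        y = np.copy(x)                   # one crude early reflection off a wall
        y[delay:] += reflect * x[:-delay]
        return y

    rate = 44100
    t = np.linspace(0, 1.0, rate, endpoint=False)
    string = np.sin(2 * np.pi * 110 * t)       # an idealized vibrating string

    heard = room(amp(preamp(pickup(string))))  # what reaches the ear: a stack of stages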

3. The most important thing to consider when asking the question 'What are we becoming familiar with when we listen to recorded sound?' is what happens after the sound reaches our ears. In my view, this unfolds in three very general steps.

The first is what we are literally able to hear, given the physical limitations of the human ear. This includes all of the innumerable psychoacoustic phenomena that occur within the ear, such as our hearing range, or the 'masking effect' that renders us unable to hear the quieter of two nearby frequencies sounding simultaneously.
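
The masking effect is easy to demonstrate for yourself. Here is a minimal sketch that writes a short WAV file containing a loud 1000 Hz tone and a much quieter 1050 Hz neighbor; the exact frequencies and levels are assumptions chosen for illustration, but most listeners will not hear the quieter tone as a separate pitch while the louder one plays.

    import numpy as np
    import wave

    rate = 44100
    t = np.linspace(0, 3.0, 3 * rate, endpoint=False)
    masker = 0.8 * np.sin(2 * np.pi * 1000 * t)   # loud tone
    masked = 0.05 * np.sin(2 * np.pi * 1050 * t)  # quiet neighbor, hidden by the masker
    mix = masker + masked
    samples = np.int16(mix / np.max(np.abs(mix)) * 32767)

    with wave.open("masking_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(samples.tobytes())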

The second is the act of attention, focus, and concentration. The listening ear is constantly shifting specific attention and general concentration on both a conscious and subconscious level simultaneously, and in infinite variety. One moment, I'm concentrating on the texture of a synthesizer amidst a torrent of other sounds in my headphones, while staying attentive to the sound of cars to the left of me as I ride my bike through traffic; another moment, I'm comparing subtle percussive changes between two measures while completely tuning out the loud and spirited debate of a nearby group from my conscious attention.

The third step is one of intellectual thought. Consciously or unconsciously, as we have sensory experience, we are constantly making associations between images, sounds, memories, abstract ideas, smells, words, rules, people, etc. This is a very important and interesting step, and it is the step I will focus on throughout this paper.

I would like to define two basic types of recorded sound: representational and non-representational sound.

Representational sound is recorded with the intention that the listener be able to discern the original, physical source of the sound, and to visualize what is creating the audio. The listener, for instance, is intended to recognize the sound of a guitar, a trumpet, a snare drum, a female voice, or an animal. A large part of the enjoyment of representational sound is an awareness of the recorded object's history, social status, or function. Non-representational sound is recorded with the intention of being experienced purely as abstract sound, without any worldly reference.

To understand how we react to sounds whose source we recognize (or are intended to recognize), we must first quickly examine the nature of memory.

Imagine coming home to your apartment. You use no cognitive effort to get out the right key, unlock the entry door, turn down the hall, walk up the stairs, navigate the maze of hallways, get out another key, turn the knob, enter, and close and lock the door behind you. You're completely on autopilot, and to some extent, hardly even conscious of your actions. In some cases, you might not even remember doing it upon later reflection. But now imagine that someone has played a trick on you, and the door handle to your apartment is three inches higher than usual, or the door is suddenly five pounds heavier than usual, or there is one extra step on the way up. Any one of these minor changes would immediately make you alert, and probably more confused and dismayed than the situation calls for. This analogy, stolen from Jeff Hawkins' book On Intelligence, illustrates a basic concept of memory: we tend to be alerted by things out of the ordinary, and these are the things that we remember. The brain's job is to constantly create and update a virtual map of the world we live in so that we can live and thrive. We devote the least attention to the people, places, and things that we are most familiar with, because these things have a firm place in our long-term memory, and we can maneuver around them on autopilot. In these situations, the brain is free to devote its energy to something else, like thinking about that novel you've been kicking around. But as soon as something out of the ordinary appears, we immediately become alert, and our brain catalogues the new anomaly into its long-term memory. The more traumatic the event, the more deeply we remember it. In this way, our brains build a repertoire of surprisingly detailed images and spatial maps.

Every one of our senses is designed to inform us of our physical surroundings by testing events against our previous experiences, our catalogue of previously encountered images. Furthermore, it is interesting to note that there is nothing fundamentally different in the way that the brain processes information from different senses; the brain handles visual information and auditory information in remarkably similar ways. This has been demonstrated by experiments that successfully "rewired" sensory input to parts of the brain normally used by other senses. This means that auditory events make our brains reference their models of people, places, and things just as visual, taste, or olfactory events do.

In this sense, the sound of an acoustic guitar is enjoyed not simply because of its timbral quality. Due to its widespread cultural status, everyone is, to some degree, aware of its general shape and size, most are somewhat familiar with the process of playing the guitar, and listeners can imagine the player's fingers moving swiftly. So the sound of a guitar is enjoyed largely because of the listener's awe at instrumental virtuosity, among other things. The main observation that I would like to make about representational sound is that it is enjoyed primarily for its reference to our memories, not for its beauty as abstract sound design.

It is a truism to say that language is an example of audio that helps us to learn about the world, including the people in it and their emotions. Language's role in shaping our culture and worldview is only debated in terms of degree. I think that music, regardless of intention, has an emergent effect on the brain that is very similar to language. It is strikingly clear how much classical music, for instance, which laid down so many fundamental paths in the language of music, models the tone and cadence of conversation. And in our music-drenched culture, there is no doubt a certain degree of feedback, whereby music in turn causes our conversational tone to model current musical trends, creating a sort of symbiotic relationship. Both forms of audio are arranged in deliberate patterns that change over time; both arrive at increasingly less ambiguous references to emotions, locations, objects, situations, and so on as people use them in similar contexts; and their forms have many parallels. I would go so far as to say that music is a language very similar to any functional example, but with one fundamental difference: spoken language has much stricter rules of grammatical organization. Language disallows ambiguity in its references due to its practical function, whereas music, due to its artistic niche, is encouraged to be ambiguous. It is a generality that music's domain is emotion while language's domain is perceived reality, but it is only a vague one, and to say that the roles of the two are mutually exclusive is a false dichotomy. Music derives most of its emotional power by referencing the perceived, "empirical" world in an irrational way.

So, then, what's left for non-representational, or abstract, sound? After all, if one accepts that even the most fundamental emotions we experience have their roots in the perceivable world as it is interpreted and stored by our brains, then the entire notion of "abstract" art comes into question. For the purposes of this essay, it must suffice to say that abstract music is created with the intention of making either no reference, or only vague reference, to the empirical world.

I believe that music is inherently an art form that attempts to transcend the empirical world, to become part of the mystical, emotional soul, apart from the physical reality. Music attempts to become 'The Self,' perceived by its owner as a separate dimension. Music constantly changes so that it can keep its place in this ephemeral world, apart from the reality that is maintained by the practical senses, and by language. Music always has to keep up with the brain, because the brain's function is to turn novelty into routine. I think music's history can be viewed as a series of paradigms of detachment from empirical reality, followed by mental habituation; that is, the brain categorizing these departures from reality as distinct, separate entities that become mundane and routine. In response, music finds new departures, and so on. The following is a short list of examples:


1. I think this is demonstrable as early as formal music's beginnings, when musicians were conceiving of ways to separate frequencies into modular tones as a way to transcend the random noise of natural sounds and early percussive instruments. Instrument design and manufacturing were fueled by a search for a 'pure tone' not found in the natural world.

2. Jumping ahead to post-industrial times, the 'pure tone,' classical approach to music evoked little response from the modern ear, which was now constantly being barraged by complex, mechanical sound. In response, musicians and composers developed an interest in purposefully creating sounds that are unpredictable and, to a certain extent, uncontrollable. Early on, this was done by using dissonant chords and melodies in classical composition, followed by experimental composers who used microtones (tones in between the notes of the traditional 12-note chromatic scale). Others competed with the complexity of modern sound by incorporating more percussion instruments, whose waveforms are close to purely random noise. The Italian Futurist Luigi Russolo took this concept to its end in his treatise The Art of Noises (1913), in which he describes in detail what he envisioned for the future of music. Central to this essay are descriptions of future "noise machines" that would emulate the randomness of sounds like clanging, scraping, and shattering under the precise guidance of performers.

Later, composers such as Earle Brown and John Cage explored indeterminate composition. These methods can be grouped into two main categories. First, there are compositional methods that are determinate as to their composition and indeterminate as to their performance; using this method, the composer encourages the performers to improvise, following vague guidelines. An example of this technique is Earle Brown's December 1952, a piece whose score is an array of vertical and horizontal lines of varying spacing and width, instructing the performer to interpret the score visually and translate the graphical information into music. Second, there are compositions that are indeterminate as to their composition and determinate as to their performance, such as John Cage's Music of Changes, a piece for piano for which all compositional choices were made by flipping coins, as sketched below.
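
As a toy illustration of that second category, here is a minimal sketch of a chance-determined score. It is not Cage's actual procedure (his coin tosses consulted I Ching charts); the pitch set, durations, and six-coin scheme below are my own illustrative assumptions.

    import random

    PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
    DURATIONS = [0.25, 0.5, 1.0, 2.0]  # in beats

    def flip_coins(n=6):
        """Flip n coins and return the number of heads."""
        return sum(random.choice([0, 1]) for _ in range(n))

    score = []
    for _ in range(16):
        pitch = PITCHES[flip_coins() % len(PITCHES)]
        duration = DURATIONS[flip_coins() % len(DURATIONS)]
        score.append((pitch, duration))

    # A fully determinate score, arrived at by indeterminate means.
    print(score)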

3. The invention of recording media and the speaker allowed the detachment of real-world sounds from their sources. Pierre Schaeffer called sound that one hears from a speaker without seeing its source acousmatic sound. The term is derived from akousmatikoi, which refers to the pupils of the philosopher Pythagoras, who were required to sit in absolute silence while listening to their teacher deliver his lecture from behind a veil or screen so that they could better concentrate on his teachings. This 'acousmatic' listening allowed listeners to actually listen to sounds that they had heard many times, and to appreciate their complex beauty. But the speaker in this sense creates a paradox: the sound is quite literally detached from the source object, yet it is this detachment that allows the ear to truly listen to the otherwise overlooked sound. The listener is thus simultaneously detached from, and familiarized with, the empirical world.

The musical movement that truly pursued this detachment was musique concrète, which admitted any and all sound events into the musical vocabulary, to be pieced together into a sound collage. As Pierre Schaeffer describes in his writings, traditional music starts as an abstraction, musical notation on paper or another medium, which is then realized as audible music. Musique concrète, on the other hand, strives to start with the "concrete" sounds that emanate from base phenomena and abstract them into a musical composition.

4. But the recorded sounds, although detached and rearranged, still reflected their sources with near-exact fidelity (see my discussion of microphones above). To detach complex sounds even further from their empirical sources, early artists generally took two distinct approaches: repetition and manipulation.

No doubt, at some point you've repeated a word or phrase over and over again in your head, or out loud, until it seemed to lose meaning and become absurd. You repeated it until you began to hear it for what it was: a series of phonemes (the small, independent sounds that are combined to form individual words). This is what early repetitive minimalists tried to achieve in their music: detachment through familiarization. Terry Riley achieved this with the sound of violins by performing long, languid strokes on one note of a violin, later adding first and second harmonics. But I think the best example of this type of repetition is Steve Reich's "Come Out". The piece starts out with a phrase recorded from a survivor of a civil rights era riot: "to let the bruise blood come out to show them." Reich rerecorded the fragment "come out to show them" on two channels, which are initially played in unison. Gradually, they slip out of sync; the discrepancy widens and becomes a reverberation. The two voices then split into four, looped continuously, then eight, and continue splitting until the actual words are unintelligible, leaving the listener with only the speech's rhythmic and tonal patterns. This is a 13-minute piece, and what is most interesting to me is that if one listens closely, one hears sounds, words, rhythms, and even entire phrases that are nowhere to be found in the initial recording. The brain always tries to find patterns within stimuli, even if the stimuli are vague and random. This mental phenomenon is called pareidolia. Other examples include hearing messages in records played in reverse, and seeing the face of Jesus in sheet metal.
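
The phasing process itself is simple enough to sketch in a few lines of code. The following is a minimal, illustrative version, not Reich's actual tape technique: it assumes a short mono 16-bit WAV file named loop.wav and lets a second copy of the loop run 0.2% slow, so that the two copies drift steadily out of sync with each repetition.

    import numpy as np
    import wave

    with wave.open("loop.wav", "rb") as f:
        rate = f.getframerate()
        loop = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
        loop = loop.astype(np.float64) / 32768.0

    repeats = 60
    drift = 1.002  # channel B runs 0.2% slow, slipping further behind each pass

    a = np.tile(loop, repeats)  # channel A: the loop at normal speed
    # Channel B: the same loop stretched slightly long, by linear interpolation.
    idx = np.arange(int(len(loop) * drift)) / drift
    slow = np.interp(idx, np.arange(len(loop)), loop)
    b = np.tile(slow, repeats)[: len(a)]

    mix = (a + b) / 2.0
    out = np.int16(mix * 32767)
    with wave.open("phase_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(out.tobytes())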

Manipulation, as the name implies, uses technology to transform sounds. We become mentally habituated to certain types of manipulation when we hear them over and over again from different sources. We have a firm baseline of what types of sound we should expect from the naturally occurring world, and any departure from that baseline is immediately noticed by our brains. In this sense, the departures from real-world sounds become categorized just as individual sounds do. For instance, the natural difference in sound between a large and a small door of the same material slamming has a drastically different signature than a technological pitch manipulation applied to a recording of a slamming door. If you've ever played with a Casio keyboard that has a quick-record function, and played your burp sounds on different keys, you'll know exactly the difference I'm referring to. I believe that technological manipulation makes us more familiar with the medium used to contort the sound. As I wrote earlier, our brains have a baseline of what sounds normal, and when confronted with variations on this normality, what our brains record are the differences in the sound. One experiment that you can try at home to demonstrate this to yourself is to walk slowly towards a wall with your eyes closed while repeating any word over and over again at the same volume and intonation. You'll be able to tell when you're about to hit the wall, because your ears are picking up subtle differences in the amount of time between the original spoken word and its echoes. Or think about what a speaking person sounds like through a wall, or what a voice sounds like in a cement room versus a carpeted room. When we hear a person speaking in an echoing cement room, we can imagine what that voice would sound like without echo and reverberation. These are all examples of our baseline of natural sound. In this sense, when we listen to recorded music, we are becoming familiar with the sound of a microphone, and when we listen to a voice being scratched on a turntable, we are becoming familiar with the sound of a turntable.
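
The wall experiment trades on a simple piece of physics: the round-trip time of the reflected word shrinks as you approach the wall. Here is a back-of-the-envelope sketch, assuming sound travels at roughly 343 m/s in room-temperature air.

    SPEED_OF_SOUND = 343.0  # meters per second, in air at about 20 degrees C

    def echo_delay(distance_m):
        """Round-trip delay, in seconds, of a sound reflecting off a wall."""
        return 2.0 * distance_m / SPEED_OF_SOUND

    for d in [5.0, 2.0, 1.0, 0.5]:
        print("%.1f m -> %.1f ms" % (d, echo_delay(d) * 1000))
    # 5.0 m -> 29.2 ms ... 0.5 m -> 2.9 ms: by half a meter the echo fuses
    # with the spoken word, and the ear hears it as a change in timbre.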

5. Electronic sound-synthesis technologies, such as synthesizers, which build sounds from the bottom up out of pure electronic signals, are an attempt to create sound with no relation whatsoever to any previously heard sound. This is the ultimate detachment from reality. But again, we see the same habituation. When the public heard the first Buchla synthesizers, or the first Theremins, the result truly was a novel, totally synthetic, unworldly sound, completely detached from previous auditory experience. Shortly after, however, with the creation of more and more synthesizers sharing the same fundamental sounds, they began to be immediately identified and categorized as synthesizers, just like pianos, guitars, and trumpets.
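
To see what 'from the bottom up' means, here is a minimal sketch of a tone built from nothing but arithmetic: a simple frequency-modulation voice with an exponential decay envelope, written to a WAV file. All the parameters (carrier, modulator, decay rate) are arbitrary choices for illustration, not a recipe for any particular instrument.

    import numpy as np
    import wave

    rate = 44100
    t = np.linspace(0, 2.0, 2 * rate, endpoint=False)

    envelope = np.exp(-2.5 * t)                  # simple exponential decay
    modulator = np.sin(2 * np.pi * 110.0 * t)    # slow inner vibration
    signal = envelope * np.sin(2 * np.pi * 220.0 * t + 4.0 * modulator)

    samples = np.int16(signal / np.max(np.abs(signal)) * 32767)
    with wave.open("synth_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(samples.tobytes())
    # No string, no air column, no struck surface: a sound with no acoustic source.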

Herein lies a core point of this essay: we develop our opinions about the world based on the nature of our experiences with it. When stimuli are presented to us in a negative environment, or when they have a direct negative effect on us, then, what do you know, we develop a negative opinion of them and treat them thereafter with distrust or hostility. Likewise for positive stimuli. I believe that the subconscious familiarization with technology that we all experience when we hear modern music leads us to view technology in general as a creative and potentially positive force that we are in control of, and one that we can and will use in an actively creative and constructive way.

Of course, with this observation I may be tipping my hand a bit to reveal a personal sacred cow. I think that technology is an increasingly integral part of our world, and that in the near future it will become progressively more of a universal foundation on our planet. I think that this is an inevitability, and that we have a choice as to how we think of technology in general. We can look at it as a negative, poisonous, 'unnatural' infection on our pure, unwitting planet, in which case this is exactly what our technological progression will be... Or, as I noted above, we can view technology as a positive, constructive force that reflects our innate creativity. Music is, in my view, the main proponent of the latter viewpoint, and it stands alongside every other art form that embraces technology creatively to assure us that technology is not an evil Frankenstein's monster, or a HAL 9000 on the loose and out of control.

6. The current trend by which music keeps its place as a sort of abstract, emotional language in a separate dimension, away from the perceptive universe, is to combine instruments, melodic architecture, found sound, words, digital glitches, musical references, multiple cultures, movie clips, pop culture references, and more in a brain-bending hodgepodge collage that derives its intensity from making as many connections in the brain as possible. This approach is especially rewarding because it effectively triggers the creative mind by making literal, physical connections between various referential points in the brain's memory storage, resulting in non-obvious connections between different things, concepts, activities, cultures, and words, which is the basis of novel thought.

This approach to music has invaluable influence on our philosophical worldview. By combining bits of audio from every imaginable source, be they 'natural' or 'unnatural' sounds, human, plant, animal, or object sounds from any culture, and synthesizing them into a cohesive whole in which no one source has any qualitative weight over any other, this music creates a wealth of connections within our brains between all of the various symbols in our memory. These connections encourage us to transcend our pragmatic, qualitative, language-based approach to the universe and to approach a worldview that is closer to the fundamental reality of the oneness and unity of everything in the universe.
