User:Tgraban/sandbox

trying out the sandbox as a test space for article on Multimodality

The idea of applying multimodality to literacy first emerged during the 1960s as part of expressivist theories of language. Expressivist thinking encouraged writers to find their voice outside of language by placing it in a visual, oral, spatial, or temporal medium.1 One theorist, Donald Murray, instructed his writing students to “see themselves as cameras” by writing down every visual observation they made for one hour. Expressivists emphasized personal growth and linked the task of writing with the visual arts by calling both composition.2 By treating writing as the result of sensory experience, expressivists defined it as a multisensory activity and argued that it should be free to be composed across all modes – tailored to all five senses.

Multimodality also drew on cognitive research about writing conducted during the 1970s and 1980s. This research, shaped by James Berlin, Lisa Ede, and Joseph Harris, took a cognitive, thought-based approach borrowed from psychology. Berlin declared that the process of composing writing could be directly compared to that of designing images and sound. Further, the research of Joseph Harris points out that alphabetic writing is the result of multimodal cognition: writers often conceptualize their work by nonalphabetic means, including visual imagery, music, and kinesthetic feelings.3 Harris believed that composition spaces can and should reflect these concepts. This idea is more commonly known as neuro-linguistic learning styles, developed by Neil Fleming in the 1970s. In Talking, Sketching, and Moving, published in 2001, Patricia Dunn argued for letting students use these visual, auditory, and kinesthetic learning styles to make stronger compositions. Dunn states, “We should be teaching students to create multimodal texts that can be accessible and persuasive to cognitively diverse audiences.”4

With the spread of the personal computer, the Internet, and digital technology in the 1990s, ideas about literacy for the millennial generation have changed drastically. The new literacy of this new media age involves text that circulates in short, informal bursts and across multiple modes, and students arrive in classrooms already practicing it. At the elementary level, children learn concepts through various media, including picture books and films – two forms of multimedia. Picture books combine images with text to convey meaning, while films combine moving images with audio. In advanced schooling, multimodality continues to shape how students encounter theories of education. This progression represents a shift in emphasis toward multiple forms of text, as opposed to the traditional print methods previously used. As students advance in their studies, their multimodal literacy evolves, allowing more kinds of sources to be used in learning. Because multimodality presents more concepts at one time, it may also accelerate learning, making it a significant factor in the development of educational theory.

In its current use for Internet and network-based composition, the term multimodality has become even more prevalent, applying to forms of text as varied as fine art, literature, social media, and advertising. The monomodality, or singular mode, that once defined novels, academic treatises, and some corporate documents is being replaced with more complex layouts. “Nowadays… text is just one strand in a complex presentational form that seamlessly incorporates visual aspect ‘around,’ and sometimes even instead of, the text itself.”1 These visual aspects often include color, images, video, and even sound and music.3 Due to its prevalence, multimodality is quickly becoming “the normal state of human communication.”2

Current social media sites such as Blogger and Tumblr encourage multimodality by allowing users to upload several different formats. Even Facebook promotes pictures, videos, and music alongside the written text of posts. One of the most prevalent examples of multimodality today is the website: most sites on the World Wide Web include text, color, images, and sound, and many feature video as well.3

Multimodality is the use of several modes in a single artifact. These modes can range from text and images to diagrams and sound. Multimodality occurs in literature, art, social media, and advertising, among other places, and it is especially apparent in blog writing. Current blogging sites such as Blogger allow users to write text, change its color, upload images and videos, and insert sound and music. A digital presentation is not the only place to encounter multimodality, however: combining sketches with color on a printed page is a similar, if less complex, mixture of modal tools.

1“But things have changed: nowadays that text is just one strand in a complex presentational form that seamlessly incorporates visual aspect ‘around,’ and sometimes even instead of, the text itself. We refer to all these diverse visual aspects as modes of information presentation.” (1) Bateman, John A. Multimodality and Genre: A Foundation for the Systematic Analysis of Multimodal Documents. New York: Palgrave Macmillan, 2008. 1. Print.

2“And that… is the argument for taking ‘multimodality’ as the normal state of human communication.” (1) Kress, Gunther. Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge, 2010. 1. Print.

3“These sites use language as well as pictures, and many employ sound and music as well.” (252) Duncum, Paul. "Visual Culture Isn't Just Visual: Multiliteracy, Multimodality and Meaning." Studies in Art Education. 45.3 (2004): 252-264. Print.