= Judith Holler =

Dr. Judith Holler is a research group leader in the field of human communication in social interaction. She began her research career at the University of Manchester in September 2000 and currently works at the Max Planck Institute for Psycholinguistics, located on the campus of Radboud University in Nijmegen, Netherlands. Her research focuses on the pragmatics and semantics of human language in face-to-face interaction. Many of her studies are psycholinguistic in nature, examining how signals such as hand gestures and eye gaze affect human communication. Much of her work combines nonverbal and verbal communication in order to find new ways to decode gestures, eye gaze, and other body movements. Her recent research has focused on the relationship between body movements and age.

Her Twitter and ResearchGate accounts provide further insight into her career as a research group leader. She is a notable researcher in the field of communication studies, with work that has been cited over 1,000 times. She also describes her interest in interpersonal communication on her ResearchGate profile. She was recently awarded a European Research Council Consolidator Grant, which she is using to fund an ongoing research project called CoAct.

== Multimodal communication ==
Holler and Stephen C. Levinson coined the term "multimodal language" to encompass the various visible and vocal signals used in human communication. Their study aimed to build a better understanding of the layers of human communication. It first identified a problem in the exchange of face-to-face communication between a sender and a receiver: senders perform many body movements while conveying a message, but not all of those movements are intended as part of the message. The receiver must therefore work out which movements are relevant to the message, and then pair those movements with the spoken language. For example, a speaker might say "come over here" while motioning a hand towards their own body, and the listener must link the gesture to the spoken request. Holler and Levinson called this the binding problem.

The study then sought a solution to the binding problem. Its methodology focused on the combination of anticipated sounds and the constant repetition of those sounds. The observations point to multimodal signaling involving a set of intricate processes that are missing from the most common models of human language processing, and the evidence suggests revising those models accordingly. Since the study was completed, new evidence has emerged about the way people communicate with each other and decode messages within a conversation. The study also showed that previous research had been reluctant to explore this side of communicative exchange.

== Eye gaze and conversations ==
In 2018, Dr. Holler conducted a study alongside Kobin H. Kendrick on whether the direction of a participant's eye gaze predicts how they will respond in a conversation. While not explicitly stated in the abstract, the study could also yield new data on detecting possible instances of deception in conversation. The experiment used quantitative methods, measuring two gaze behaviors: gaze continuation towards the speaker, which represented connection to the conversation, and gaze aversion, which represented disconnection from it. The purpose of the study was to determine whether eye gaze could predict communicative tendencies.

To measure this, the researchers analyzed seven 20-minute conversations from the Eye-tracking in Multimodal Interaction Corpus (EMIC). Participants wore eye-tracking glasses and a microphone that recorded their speech for later comparison. They were mainly asked a series of yes-or-no questions in order to keep the data as objective as possible. The results showed a reliable parallel between eye gaze direction and each participant's communicative tendencies, opening new paths for research on eye gaze and communication.

== Co-speech gestures and Parkinson's disease ==
In her recent work, Dr. Holler has investigated how co-speech gestures reveal the impact of Parkinson's disease on a person's ability to communicate with others. Holler and her associates define "co-speech gestures" as language-related movements that illustrate the speech they accompany. The study aimed to determine whether such gestures depend on the engagement of undamaged sensory systems. It provides new data on how people with Parkinson's disease communicate and offers insights into understanding this form of communication more clearly. The study was first conducted in 2016 and was revisited in 2020 with a focus on the relationship between body movement, motor skills, hand gestures, and sensory systems in the brain.

The study used a sample of 37 Parkinson's patients. Participants were split into two groups and shown cartoon images of high-motion and low-motion actions, which they were asked to describe in as much detail as possible. Participants produced more gestures when describing the high-motion images than the low-motion images, which called for fewer gestures. The findings suggested that co-speech gestures do not depend on undamaged sensory systems, although it may be difficult for a person with Parkinson's disease to demonstrate actions involving a higher level of body movement. The study also suggests that previous research had neglected to decode this form of communication.