Human–animal communication

Human–animal communication is the communication observed between humans and other animals, ranging from non-verbal cues and vocalizations to the use of language.

Some human–animal communication may be observed in casual circumstances, such as the interactions between pets and their owners, which can reflect a form of dialogue that, while not necessarily verbal, conveys meaning in both directions. A dog being scolded is able to grasp the message by interpreting cues such as the owner's stance, tone of voice, and body language. Owners, in turn, can learn to discern the subtle differences between barks or meows: there is a clear difference between the bark of an angry dog defending its home and the happy bark of the same animal while playing. Communication (often nonverbal) is also significant in equestrian activities such as dressage.

One scientific study found that 30 bird species and 29 mammal species share the same pattern of pitch and speed in basic messages, suggesting that humans and those 59 species can understand one another when expressing "aggression, hostility, appeasement, approachability, submission and fear."

Birds
Parrots are able to use words meaningfully in linguistic tasks. For example, a grey parrot named Alex learned one hundred words and, after training, used English words to answer questions about color, shape, size and number correctly about 80% of the time. Without being trained to do so, he also said where he wanted to be taken, such as his cage or the back of a chair, and protested when taken elsewhere or when hidden objects were not where he thought they were. He once asked what color he was, which has been called the only question so far asked by a non-human animal. Scientific American editor Madhusree Mukerjee described these abilities as creativity and reasoning comparable to those of nonhuman primates or cetaceans, while expressing concern that extensive language use resulted in feather-plucking, a possible sign of stress.

Most bird species have at least six calls which humans can learn to understand, for situations including danger, distress, hunger, and the presence of food.

Pigeons can learn to distinguish paintings by different artists. They can also learn to recognize up to 58 four-letter English words (an average of 43), though they were not taught any meanings to associate with the words.

In one experiment, Java sparrows chose music by sitting on a particular perch, which determined which music was played. Two of the birds preferred Bach and Vivaldi over Schoenberg or silence; the other two had varying preferences among Bach, Schoenberg, white noise and silence.

The greater honeyguide has a specific call to alert humans that it can lead them to honey, and also responds to a specific human call requesting such a lead. By leading humans to honeybee hives, it can eat the discarded honeycomb wax after the honey is collected. The human call varies regionally, so the honeyguide's response is learned in each area, not instinctive.

Crows identify and respond differently to different human faces and can be trained to understand and reply to verbal commands.

Fictional portrayals of sentient talking parrots and similar birds are common in children's fiction, such as the talking, loud-mouthed parrot Iago in Disney's Aladdin.

Primates
Chimpanzees can make at least 32 sounds with distinct meanings for humans.

Chimpanzees, gorillas and orangutans have used sign language, physical tokens, keyboards and touch screens to communicate with humans in numerous research studies. The research showed that they understood multiple signals and produced them to communicate with humans. Despite public perceptions of these experiments, however, there is broad consensus among linguists that sign language experiments with great apes have not produced evidence of true linguistic ability, and high-profile cases such as Koko have faced intense criticism.

Baboons can learn to recognize an average of 139 four-letter English words (maximum of 308), though they were not taught any meanings to associate with the words.

Primates have also been trained to use touch screens to indicate their musical preferences to researchers. In a Toronto study, orangutans were presented with hundreds of songs in random order; for each, they heard one 30-second segment and then chose between repeating that segment or 30 seconds of silence. Different orangutans chose to replay from 8% to 48% of the segments, and all exhibited stress throughout the trials. There was no pattern of selection by genre, and the researchers did not look for other attributes shared by the chosen segments. No comparison was available as to how many 30-second segments humans would repeat in the same situation. In another experiment, the orangutans did not distinguish between music played in its original order and the same music sliced into half-second intervals played in random order. Chimpanzees can hear higher frequencies than humans can; if orangutans can too, and if such overtones are present in the recordings, these could have affected their choices.

Lilly experiments
In the 1960s, neuroscientist John C. Lilly sponsored English lessons for a bottlenose dolphin (Tursiops truncatus). The teacher, Margaret Howe Lovatt, lived with the dolphin for two and a half months in a house on the shore of the Virgin Islands. The house was partially flooded, allowing them to be together for meals, play, language lessons, and sleep. Lilly thought of this as a mother–child dyad, though the dolphin was five to six years old. Lilly said that he had heard other dolphins repeating his own English words, and believed that an intelligent animal would want to mimic the language of its captors in order to communicate. The experiment ended in the third month and did not restart, because Howe Lovatt found the two-room lab and the dolphin's constant bumping too constricting.

After several weeks, a concerted effort by the dolphin to imitate the instructor's speech was evident, and human-like sounds were recorded. The dolphin could reliably perform tasks such as retrieving aurally indicated objects. Later in the project, the dolphin's ability to process linguistic syntax became apparent: it could distinguish between commands such as "Bring the ball to the doll" and "Bring the doll to the ball." This ability not only demonstrates the bottlenose dolphin's grasp of basic grammar, but also suggests that the dolphins' own communication might include syntactical rules. The correlation between the length in 'syllables' (bursts of the dolphin's sound) and the instructor's speech also went from essentially zero at the beginning of the sessions to nearly perfect by their completion: when the human spoke five or ten syllables, the dolphin also produced five or ten bursts of sound.

Two experiments of this sort are explained in detail in Lilly's book, Mind of the Dolphin. The first experiment was more of a test run to check psychological and other strains on the human and cetacean participants. Its goal was to determine the extent of the need for other human contact, dry clothing, time alone, and so on. Despite tensions after several weeks, Howe Lovatt agreed to spend the two and a half months in isolation with the dolphin.

Herman experiments
Experiments by the research team of Louis Herman, a former collaborator and student of Lilly's, demonstrated that dolphins could understand human communication in whistles and respond with the same whistles.

A female bottlenose dolphin, Phoenix, understood at least 34 whistles, which formed a system of two-way communication. By having separate whistles for object and action, Herman could reorder commands without fresh teaching (e.g., "take hoop to ball"). Successful communication was shown when Herman used new combinations and the dolphin understood and did what he asked, without further training, 80–90% of the time.
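
The compositional principle at work here, independent symbols for objects and actions combined by word order, can be sketched in a few lines of Python. The vocabulary and names below are illustrative only, not Herman's actual whistle inventory:

```python
# Sketch of a compositional object/action command code, in the spirit of
# Herman's whistle experiments. The vocabulary is invented for illustration.

OBJECTS = {"ball", "hoop", "pipe", "frisbee"}
ACTIONS = {"fetch", "touch", "take-to"}

def interpret(tokens):
    """Interpret an (action, object, ...) sequence.

    Because actions and objects have independent symbols, any novel
    combination can be interpreted without fresh training.
    """
    action, *objs = tokens
    if action not in ACTIONS or not objs or any(o not in OBJECTS for o in objs):
        raise ValueError(f"unknown command: {tokens}")
    if action == "take-to":
        src, dst = objs  # word order carries the syntax: take <src> to <dst>
        return f"move {src} to {dst}"
    return f"{action} {objs[0]}"

# A never-before-trained combination still parses, because the symbols compose:
print(interpret(["take-to", "hoop", "ball"]))  # move hoop to ball
print(interpret(["fetch", "frisbee"]))         # fetch frisbee
```

The point of the sketch is that the command space grows multiplicatively (actions × objects) while the symbol inventory grows only additively, which is why new combinations required no further training.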

In 1980, Herman taught six whistles to a female bottlenose dolphin, Kea, referring to three objects and three actions, and the dolphin followed his instructions. He wrote, "In addition to mouthing the three familiar training objects in the presence of the mouth name, Kea correctly mouthed on their first appearance a plastic water pipe, a wooden disc, and the experimenter's open hand. The same type of immediate response generalization occurred for touch and fetch."

Richards, Wolz and Herman (1984) trained a dolphin to make distinct whistles for objects, "so that, in effect, the dolphin gave unique vocal labels to those objects."

Herman's later publications do not discuss the whistle communication. Herman started getting US Navy funding in 1985, so further expansion of the two-way whistle language would have been in the classified United States Navy Marine Mammal Program, a black project.

Herman also studied the cross-modal perceptual abilities of dolphins. Dolphins typically perceive their environment through sound waves generated in the melon of the skull, a process known as echolocation (similar to that of bats, though the mechanism of sound production is different). The dolphin's eyesight, however, is also fairly good, even by human standards.

Herman's research found that any object, even of complex and arbitrary shape, identified either by sight or by sound, could later be correctly identified by the dolphin with the alternate sense modality with almost 100 percent accuracy, in what is classically known in psychology and behaviorism as a match-to-sample test. The only errors noted were presumed to reflect a misunderstanding of the task during the first few trials, not any limitation of the dolphin's perceptual apparatus.

This capacity is strong evidence for abstract and conceptual thought, in which an idea of the object is stored and understood not merely by its sensory properties; such abstraction has been argued to be of the same kind that underlies complex language, mathematics, and art, and implies considerable intelligence and conceptual understanding in Tursiops and possibly many other cetaceans. Accordingly, Lilly's interest later shifted to whale song and the possibility of high intelligence in large whales, and Herman's research at the now misleadingly named Dolphin Institute in Honolulu, Hawaii, focused exclusively on the humpback whale.

Other researchers

 * Batteau (1964) developed machines for the US Navy which translated human voices into higher frequencies for dolphins to hear, and dolphin voices into lower frequencies for humans to hear. The work continued at least until 1967, when the Navy classified its dolphin research; Batteau died the same year, before publishing his results.
 * A Soviet icebreaker crew (1985) experimented with different kinds of music to signal to 1,000 to 3,000 belugas that they could follow the ship to open water; the whales responded to classical music.
 * Reiss and McCowan (1993) taught dolphins three whistles (ball, ring, rub), which the two dolphins produced, and even combined, when playing with the ball and/or ring, or getting a rub.
 * Delfour and Marten (2005) gave dolphins a touchscreen to show that they could recognize a musical note.
 * Kuczaj (2006) used an underwater keyboard, which humans and dolphins can touch to signal an action.
 * Amundin et al. (2008) had dolphins point narrow echolocation beams at an array of hydrophones, which acted like a touchscreen, to communicate with the researchers.
 * Reiss (2011) used an underwater keyboard which dolphins could press. A dolphin defined a key as "I want a small fish".
 * Herzing (2013) used an underwater keyboard in the open ocean which dolphins and humans could press to choose a plaything.
 * Herzing (2014) created three whistles for "play objects (Sargassum... scarf, and rope)", and found that wild dolphins understand them, but has not determined whether the dolphins produce the whistles themselves.

Historical
From Roman times to modern Brazil, dolphins have been known to drive fish toward fishermen waiting on shore, and signal to the fishermen when to throw their nets, even when water is too murky for the fishermen to see the arrival of the fish. The dolphins catch unnetted fish disoriented by the net.

From about 1840 to 1920, orcas smacked the water off Twofold Bay in New South Wales to signal to human whalers that the orcas were herding large baleen whales nearby, so the humans would send boats to harpoon the whales, killing them faster and more assuredly than the orcas could. The orcas ate the tongues and lips, leaving the blubber and bones for the humans.

Origins of communication with canines
It has been widely theorized that human–animal communication began with the domestication of dogs. Humans began communicating with wolves before the end of the Late Pleistocene, and the two species eventually formed a wide-scale symbiotic relationship. Modern biologists and anthropologists theorize that humans and wolves met near hunting grounds and, as the Homo sapiens diet came to rely more and more on meat, often encountered and competed with one another.

The relationship between humans and wolves yielded mutual benefits: food and protection. Humans likely began attempting to cooperate with wolves through commands, which eventually led to the more familiar domestic dogs we know today. These commands were likely the first instances of obedience training of canines, as dogs maintained a pack mentality into which humans fit as pack members.

Neolithic humans developed, seemingly unintentionally, a system of artificial selection with both livestock and animal companions, ushering in a widespread, sustenance-based foundation for human communication with animals. Recent academic discussion of the scientific data refers to this as both prezygotic and postzygotic “strong” artificial selection. During the agricultural revolution, humans began controlling the offspring of livestock by mating high-yielding animals.

Anthropologists suggest that humans began forming a different relationship with canines during the Neolithic age. This is possibly when humans began keeping dogs as pets, creating a new form of communication with domesticated animals: pet talk.

Dogs communicating to humans
In a Scientific American article from May 1884, John Lubbock described experiments teaching a dog to read text commands on cardboard cards.

Bonnie Bergin trained dogs to go to specific text on the wall to ask clearly for "water, treat or pet me." Dogs were able to learn English or Japanese text. She says service dogs can learn to find EXIT signs, bathroom gender signs, and report what disease they smell in a urine sample by going to a sign on the wall naming that disease.

Police and private dogs can be trained to "alert" when they find certain scents, including drugs, explosives, mines, scent of a suspect, fire accelerants, and bed bugs. The alert can be a specific bark or position, and can be accepted as evidence in court.

Stanley Coren identifies 56 signals which untrained dogs make and people can understand, including ten barks, five growls, eight other vocalizations, 11 tail signals, five ear and eye positions, five mouth signals and 12 body positions. Faragó et al. describe research showing that humans can accurately categorize barks from unseen dogs as aggressive, playful, or stressed, even if they do not own a dog. This recognizability has motivated machine-learning algorithms for categorizing barks, as well as commercial products and apps such as BowLingual.
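
As a toy illustration of the kind of acoustic cues such classifiers rely on, consider a rule-based sketch. The two features and all thresholds below are invented for illustration; they are not taken from Faragó et al. or from any commercial product, which instead learn such boundaries from labeled recordings:

```python
# Toy rule-based bark categorizer. The thresholds are invented for
# illustration; real systems learn them from labeled bark recordings.

def categorize_bark(pitch_hz: float, interval_s: float) -> str:
    """Guess a bark's context from two simple acoustic features:
    fundamental pitch and the interval between successive barks."""
    if pitch_hz < 400 and interval_s < 0.3:
        return "aggressive"   # low-pitched, rapid-fire barking
    if pitch_hz >= 400 and interval_s < 0.5:
        return "playful"      # higher-pitched, quick bursts
    return "stressed"         # high-pitched, widely spaced barks

print(categorize_bark(250, 0.1))  # aggressive
print(categorize_bark(600, 0.2))  # playful
print(categorize_bark(800, 1.5))  # stressed
```

A trained model replaces these hand-written rules with boundaries fit to data, but the underlying signal — pitch and timing — is the same one human listeners appear to use.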

Humans communicating to dogs
Dogs can be trained to understand hundreds of spoken words, including Chaser (1,022 words), Betsy (340 words), Rico (200 words), and others. They can react appropriately when a human uses verbs and nouns in new combinations, such as "fetch ball" or "paw frisbee."

Canine researcher Bonnie Bergin trained dogs to obey 20 written commands on flashcards, in Roman or Japanese characters, including 🚫 to keep them away from an area.

Shepherds and others have developed detailed commands to tell herding dogs when to move, stop, collect or separate herd animals.

Mutual communication
Claims of interspecies communication between dogs and humans using sound buttons have prompted researchers at the University of California, San Diego to begin an ongoing research effort (as of June 2021) into potential canine linguistic capabilities.

Felines
Human–feline communication dates to at least 9500–10000 B.C., according to archeological evidence from the Neolithic village of Shillourokambos on the Mediterranean island of Cyprus. Human and cat remains were found buried together along with ceremonial seashells, polished stones, and other decorative artifacts, suggesting that the two species had begun building a relationship with one another. The keeping of feline companions began with the establishment of organized wide-scale agriculture, as humans needed a way to exterminate the vermin that inhabited food stores.

Evidence of regular domestication of felines begins around 5000 B.C. in Ancient Egypt, with cats becoming a tool that humans kept near food surpluses as agriculture became more widespread and regulated. Cats have a commensal relationship with humans and are now commonly kept as housepets. Modern felines often perform no real duties and are housetrained. Owners communicate with them through pet talk, yet there is little to no evidence that felines can understand humans or are capable of consistent training; most reported cases are individual, and replication is very difficult.

Other animal training
Humans teach animals specific responses for specific conditions or stimuli. Training may be for purposes such as companionship, detection, protection, research and entertainment. During training humans communicate their wishes with positive or negative reinforcement. After training is finished the human communicates by giving signals with words, whistles, gestures, body language, etc.

APOPO has trained southern giant pouched rats to signal to humans, by scratching the ground, the presence of land mines, and to detect tuberculosis in medical samples. They identify 40% more cases of tuberculosis than clinics do, an extra 12,000 cases from 2007 to 2017. They identified 100,000 mines from 2003 to 2017, certifying 2,200 ha as mine-free. They are accurate enough that, once the mines the rats have identified are removed, the human trainers walk across the land themselves.

Rats (Wistar, Rattus norvegicus) have been taught to distinguish and respond differently to different human faces.

Patricia McConnell found that handlers around the world, speaking 16 languages, working with camels, dogs, donkeys, horses and water buffalo, all use long sounds with a steady pitch to tell animals to go more slowly (whoa, euuuuuu), and they use short repeated sounds, often rising in pitch, to speed them up or bring them to the handler (Go, Go, Go, claps, clicks). Chimpanzees, dogs, gulls, horses, rats, roosters, sheep and sparrows all use similar short repeated sounds to tell others of the same species to come closer.

Even fish, which lack a neocortex, have been taught to distinguish and respond differently to different human faces (archerfish) or styles of music (goldfish and koi).

Molluscs, with totally different brain designs, have been taught to distinguish and respond to symbols (cuttlefish and octopus), and have been taught that food behind a clear barrier cannot be eaten (squid).

A harbor seal, Hoover, learned as a pup to speak several phrases in understandable English from his human foster parent, and used them in appropriate circumstances during his later life at the New England Aquarium until his death in 1985. Other talking animals have been studied, though they did not always use their phrases in meaningful contexts.

Animal communication as entertainment
Though animal communication has always been a topic of public comment and attention, for a period in history it surpassed this and became sensational popular entertainment. From the late 18th century through the mid-19th century, a succession of "learned pigs" and various other animals were displayed to the public in for-profit performances, boasting the ability to communicate with their owners (often in more than one language), write, solve math problems, and the like. One poster, dated 1817, advertises a group of "Java sparrows" as knowing seven languages, including Chinese and Russian.