The language of dolphins

The language of dolphins, represented visually

People have always been intrigued by the sounds dolphins make underwater, and a team of British and American researchers has now shed some light on the phenomenon by creating the first high-definition images of these strange yet wonderful sounds. They achieved this feat with a CymaScope, a novel instrument that reveals the complex patterns contained within sounds.

The scientific team used high-quality audio recordings of these marine mammals and was able to study the graphic structure of the sounds. The visual representations of the sounds are called “CymaGlyphs”, according to the project coordinators, John Stuart Reid, a British acoustic engineer, and Jack Kassewitz, an American dolphin specialist. The CymaGlyphs, they noted, form the basis of the lexicon that makes up the dolphins’ language, since each “word” is represented by a distinct image pattern.

Over the years, most researchers have subscribed to the idea that although certain sounds emitted by dolphins represent a language, their complexity makes them difficult to analyse. Previous attempts to represent graphically the sounds emitted by cetaceans such as whales and dolphins produced little more than simple graphs showing only frequency and amplitude.

The CymaScope, by contrast, captures the actual acoustic vibrations that are “imprinted” on the dolphin’s natural environment, the researchers said, which has made it possible to render visually the complex features of dolphin sounds, an unprecedented achievement.

Cetologists have postulated that dolphins have “evolved the ability to translate the dimensional information they capture with their echolocation sound beam”. The CymaScope reveals the dimensional structures contained within each sound, the project leaders explained.

Mr Kassewitz, who also heads the SpeakDolphin.com project in Florida (United States), explained: “There is strong evidence that dolphins are able to ‘see’ with sound, much as humans use ultrasound to see an unborn child in its mother’s womb. The CymaScope gives us, for the first time, a glimpse of what dolphins might be ‘seeing’ with the sounds they emit.”

For his part, Mr Reid explained that sound travels in expanding beams and holographic bubbles, not as a wave, as most people had believed. “Whenever these sound beams or bubbles meet a membrane, the acoustic vibrations are imprinted on its surface, producing a CymaGlyph, a repeatable pattern of energy,” the scientist said. “The CymaScope uses the surface tension of water as a membrane, because water reacts quickly and can reveal the intricate structures within the shape of the sound. These are tiny details that can be captured with a camera.”

According to their investigations, the CymaGlyph patterns are similar to what dolphins perceive from their own returning echolocation beams and from the beams of other dolphins.

“The technique we use is similar to the one employed to decipher Egyptian hieroglyphs,” Mr Reid acknowledged. “Jean-François Champollion and Thomas Young used the Rosetta Stone to unravel the key elements of the basic text that made it possible to decipher the Egyptian language,” he added. “The CymaGlyphs produced by the CymaScope are comparable to the hieroglyphs of the Rosetta Stone. Now that we can convert their whistles, chirps and click trains into CymaGlyphs, we have a powerful tool for deciphering their meaning.”

Mr Kassewitz said that, building on this breakthrough, they intend to carry out a series of experiments recording the sounds dolphins emit in the presence of different kinds of objects. “Dolphins can produce complex sounds well above the threshold of human hearing,” he noted. “But recent advances in high-frequency sound recording technology allow us to capture dolphin sounds in more detail than ever before.”

For further information, please visit:

CymaScope:
http://www.cymascope.com

SpeakDolphin:
http://www.speakdolphin.com

Source: http://cordis.europa.eu/fetch?CALLER=NEWSLINK_ES_C&RCN=30407&ACTION=D  [Date: 2009-01-29]

Diplomatic talks among dolphins to avoid physical aggression

Mothers and calves communicate through whistles
Dolphins issue acoustic warnings to avoid physical aggression

  • They use burst-pulsed sounds to warn the others and avoid confrontations
  • It is the most complete and detailed study of dolphin communication to date
  • A Spanish scientist, Bruno Díaz, is the lead author of the study

DAVID SIERRA 07.06.2010 Dolphins ‘talk diplomatically’ to avoid physical aggression between individuals. They emit very particular sounds, unknown until now, with which they establish their hierarchy within the group. These are burst-pulsed sounds, completely different from the whistles they use to keep in contact with one another.

This is one of the main conclusions of a study led by the Spanish scientist Bruno Díaz. It is the most complete European study of the repertoire of sounds these animals use to communicate. It was carried out off the north-east coast of Sardinia (Italy), where the Italian marine animal research institute (BDRI) is based.

The study, covered by the SINC platform and published by Nova Science Publishers, was conducted on a population of free-ranging dolphins resident off the Costa Esmeralda (Italy). Over almost five years, the researchers were able to describe the sounds dolphins emit and associate them with different behaviours.

Bruno Díaz, who led the research and directs the BDRI, told RTVE.es that dolphins “emit whistles to maintain contact between individuals, but there is another type of sound, burst pulses, used to establish a hierarchical rank among the individuals of the group”.

In various situations, says Díaz, “there are high levels of competition, above all over food, but dolphins emit these sounds to avoid a physical fight; they talk diplomatically without having to waste energy”.

Another peculiarity of these sounds is that they are selective, unlike in humans. If a group of dolphins is competing for the same piece of food, “the most dominant animal sends this message only to the individuals in the group it chooses, without the rest taking any notice”, Díaz explains.

In the five years the research lasted, “there was never physical contact between them when these signals were given. The warnings work: even when food was present, there was never a single physical attack”, Díaz concludes.

Watch the video

Source: http://www.rtve.es/noticias/20100607/charlas-diplomaticas-entre-delfines-para-evitar-agresiones-fisicas/334514.shtml

http://www.rtve.es/alacarta/videos/ciencia-y-tecnologia/sonidos-pulsatiles-los-delfines-para-evitar-agresiones-fisicas/793403/

Dolphins ‘talk’ like humans

  • A study analyses the sounds dolphins emit
  • Their ‘whistles’ are produced by the same process as human speech
  • The researchers believe all toothed cetaceans use this system

A study suggests that dolphins produce sounds in the same way humans do when they speak.

Dolphins may talk to one another through their ‘whistles’ in the same way that humans communicate with each other, according to a recent study.

The research focuses on the whistles dolphins produce. Scientists believe these sounds are a means of communication between dolphins.

To reach this conclusion, the researchers analysed the sounds of a 12-year-old male bottlenose dolphin that was made to breathe an 80%-20% mixture of helium and oxygen, which in humans raises the pitch of the voice.

“We found that the dolphin did not change pitch when it produced the sound in helium, which indicates that the pitch is not determined by the size of its nasal cavities and that it is therefore not whistling,” says Peter Madsen, a researcher in the Department of Biological Sciences at Aarhus University and leader of the study, in remarks reported by Discovery News.
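The logic of the heliox test is worth spelling out: a true whistle is an air-cavity resonance, and a resonator’s pitch scales with the speed of sound in the gas that fills it, which is much higher in an 80/20 helium-oxygen mix than in air. The back-of-the-envelope sketch below works through the expected shift from textbook ideal-gas values (these figures are illustrative estimates, not data from the study):

```python
import math

R = 8.314          # J/(mol*K), ideal gas constant
T = 293.0          # K, assumed ambient temperature

def sound_speed(gamma, molar_mass_kg):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / molar_mass_kg)

# Air, approximated as a diatomic gas with M ~ 29 g/mol
c_air = sound_speed(1.40, 0.029)

# 80/20 heliox by mole fraction: He (monatomic) + O2 (diatomic)
M_mix = 0.8 * 0.004 + 0.2 * 0.032                 # ~0.0096 kg/mol
cp = 0.8 * 2.5 + 0.2 * 3.5                        # heat capacities in units of R
cv = 0.8 * 1.5 + 0.2 * 2.5
c_heliox = sound_speed(cp / cv, M_mix)

print(f"air:    {c_air:4.0f} m/s")
print(f"heliox: {c_heliox:4.0f} m/s  ({c_heliox / c_air:.2f}x faster)")
print(f"expected pitch shift of a cavity resonance: {math.log2(c_heliox / c_air):.2f} octaves")
```

A cavity resonance should therefore have jumped by close to an octave; the fact that the whistle’s pitch stayed put is what points to vibrating tissue instead.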

The sounds dolphins generate produce faint tissue vibrations analogous to those of the vocal folds of humans and other land animals, according to the article published in the Royal Society journal Biology Letters.

“The dolphin makes sounds with connective tissue in the nose vibrating at the frequency it wants to produce, by adjusting the muscular tension. It is the same system we humans use to make sounds with our vocal folds when we speak,” Madsen adds.

Translating dolphins?

Scientists already know that dolphins share information about themselves in order to identify one another and to help stay in contact, even when travelling.

Madsen’s team believes that all odontocetes, the toothed marine mammals, a group that includes dolphins, orcas, belugas and other cetaceans, use this system to communicate.

The researchers are hopeful of finding a system that will make it possible to understand the dolphins’ ‘language’.

Recently, a team of Spanish researchers developed a system to ‘translate’ belugas from the sounds they produce.

Source: RTVE.es 07.09.2011 http://www.rtve.es/noticias/20110907/delfines-hablan-como-humanos/459935.shtml

The simultaneous dolphin-English translator

Dolphins have a broad catalogue of whistles and pulsed sounds with which they communicate with one another.

  • Researchers in Florida are working on a two-way communication system
  • A computer program searches for the key ‘units’ of their language
  • The diver would carry a device for emitting ‘words’ the animals can understand

Scientists have spent decades studying the language of dolphins, trying to decipher what they say with their sounds. Dolphins have been taught to understand words and commands with which to interact with humans, but this communication runs in only one direction.

Now the research is going a step further: in Florida, a new experiment is attempting to communicate with dolphins in real time using an underwater translation device. If the experiment succeeds, it will be the first time two-way communication with these animals has been achieved.

Denise Herzing, founder of the Wild Dolphin Project in Florida, is working with the Georgia Institute of Technology in Atlanta on a project to ‘co-create’ a language with the dolphins, using their own sounds to communicate, so that the dolphins can also ‘say’ things to humans.

As reported by New Scientist, in the late 1990s the researcher and her team made the first attempt at two-way communication with dolphins. It consisted of associating a set of symbols placed underwater with sounds and commands, like a keyboard. Whenever a dolphin pointed its body at one of them, it could make a request; although the system caught the animals’ attention, it was not very successful.

The new system under development consists of a computer and two hydrophones that will try to capture the full range of dolphin sounds, their whistles and pulsed calls. This is the first of the challenges, since dolphins can produce frequencies up to 10 times higher than a human can hear.

The idea is for the diver to carry this device in a waterproof case, together with a remote control for selecting a sound from the catalogue with which to reply to the dolphin.

Searching for a pattern in their language

The translator will begin testing in the middle of this year. With the first version, divers will be able to play back one of eight ‘words’ coined by the Wild Dolphin Project team, such as ‘pass through the arch’.

The software will listen for the dolphins reproducing and imitating these sounds, and once the system can recognise these repetitions, the idea is to use them to work out the “fundamental units” of the dolphin language, the equivalent of the ‘vowels’, ‘consonants’ and ‘syllables’ of their particular idiom.
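Recognising that a dolphin has imitated one of the eight played-back ‘words’ is, at its simplest, a template-matching problem. The sketch below shows one plausible reduction of it (not the project’s actual software): each whistle is collapsed to a frequency contour, resampled to a common length and compared against the templates. All names, contour shapes and thresholds here are invented for illustration.

```python
import numpy as np

def resample(contour, n=100):
    """Linearly resample a frequency contour (Hz values over time) to n points."""
    contour = np.asarray(contour, dtype=float)
    old = np.linspace(0.0, 1.0, len(contour))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, contour)

def contour_distance(a, b):
    """Mean absolute difference between two contours, in Hz."""
    return float(np.mean(np.abs(resample(a) - resample(b))))

def best_match(whistle, templates, max_dist=1500.0):
    """Return the label of the closest template, or None if nothing is close enough."""
    scored = {label: contour_distance(whistle, t) for label, t in templates.items()}
    label = min(scored, key=scored.get)
    return label if scored[label] <= max_dist else None

# Toy example: two invented 'word' templates and a noisy imitation of one of them.
templates = {
    "seaweed":  np.linspace(8000, 14000, 60),                    # rising sweep
    "bow-ride": np.concatenate([np.linspace(12000, 6000, 30),
                                np.linspace(6000, 12000, 30)]),  # V-shaped contour
}
heard = np.linspace(8100, 13900, 45) + np.random.normal(0, 200, 45)
print(best_match(heard, templates))   # expected: "seaweed"
```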

The computer program uses a series of algorithms designed to ‘sift’ the data, pick out the patterns in the language and label them. Once a pattern has been identified, Herzing hopes to be able to combine these “fundamental units” to coin ‘words’ and to associate behaviours and objects with these sounds.

Some experts have greeted the project with scepticism, and Herzing herself admits how ambitious her plan is: “we don’t even know whether dolphins have words”, although she adds that “we can only use their signals if we know them”.

Source: http://www.rtve.es/noticias/20110509/traductor-simultaneo-delfin-ingles/431441.shtml V. R. 09.05.2011

Decoding and Deciphering Dolphin Sounds

For years we have known that dolphins have complex sounds and behavior. WDP has a database of underwater sounds and behaviors, collected over the past 26 years, for this free-ranging community of dolphins.  Two initiatives are currently underway. The first is to fully digitize and index our video database to extract patterns of communication.  We are collaborating with researchers at the University of California in San Diego using advanced frameworks of cognitive analysis to begin to look at the complexity of dolphin communication. In addition, we are working with a team at Georgia Tech to research the use of multiple algorithms designed for unique pattern discovery in dolphin sounds.

Two-Way Communication Between Dolphins and Humans – Phase II

Phase II research involves the development of a two-way communication system between the dolphins and humans.  From 1997 to 2000 we piloted the use of an underwater keyboard with our wild spotted dolphins, attempting to bridge the gap.  Although this was successful on some levels, we have been in need of a better system.  In the fall of 2010 we relaunched this project.  Joining forces with Georgia Tech in Atlanta, we are developing state-of-the-art technology specifically designed for this application.

(Wired Science, February 15, 2011; New Scientist, May 9, 2011; Zoe, May 16, 2011; and Forbes, May 10, 2011)

Dr. Thad Starner and his students are creating a high-tech, dolphin- and human-friendly diver system to restart the exploration of a new interface in the wild. In addition, pattern recognition software is being developed to help us categorize and decode the dolphins’ natural sounds.

We are excited about the potential for this new technology as a tool for exploring dolphin cognition and communication in the wild. Since communication is a part of cognition, this is a long-term endeavor, and this summer is only the preliminary step in exploring this potential interface for two-way work.

Why conduct this work?  Dolphins represent a superior non-human animal for cognitive work due to their advanced brains and sophisticated societies.  Discoveries in dolphin cognition will serve to further elevate the status of all animals on the planet, and help us define our relationship with them.  Knowing that dolphins have a complex communication system, it stands to reason that understanding it will involve many factors besides the sounds they make, including their environment, their behavior, their body posture, and their spatial and social associations.  This work holds the ability to “bridge the gap” and transcend the artificial boundaries between non-human animals and humans with the establishment of a mutually comprehensive communication system.

Source: http://www.wilddolphinproject.org/research/dolphin-communication/

To talk with aliens, learn to speak with dolphins

At the Wild Dolphin Project in Jupiter, Florida, researchers train for contact by trying to talk with dolphins.

Behavioral biologist Denise Herzing started studying free-ranging spotted dolphins in the Bahamas more than two decades ago. Over the years, she noticed some dolphins seeking human company, seemingly out of curiosity.

“We thought, ‘This is fascinating, let’s see if we can take it further,’” Herzing said. “Many studies communicate with dolphins, especially in captivity, using fish as a reward. But it’s rare to ask dolphins to communicate with us.”

Dolphins have large, sophisticated brains, elaborately developed in the areas linked to higher-order thinking. They have a complex social structure, form alliances, share duties and display personalities. Put a mirror in their tank and they can recognize themselves, indicating a sense of self.

When trained, they have a remarkable capacity to pick up language. At the Dolphin Institute in Hawaii, Louis Herman and his team taught dolphins hundreds of words using gestures and symbols. Dolphins, they found, could understand the difference between statements and questions, concepts like “none” or “absent,” and that changing word order changes the meaning of a sentence. Essentially, they get syntax.

Some tantalizing studies even suggest dolphins share their own language (see sidebar, “Easier Language Through Math”). All are qualities we’d hope to see in an alien, and no daydream of contact is complete without some attempt at communication. Yet with dolphins, our attempts have involved teaching them to speak our language, rather than meeting in the middle.

Herzing created an open-ended framework for communication, using sounds, symbols and props to interact with the dolphins. The goal was to create a shared, primitive language that would allow dolphins and humans to ask for props, such as balls or scarves.

Divers demonstrated the system by pressing keys on a large submerged keyboard. Other humans would throw them the corresponding prop. In addition to being labeled with a symbol, each key was paired with a whistle that dolphins could mimic. A dolphin could ask for a toy either by pushing the key with her nose, or whistling.

Herzing’s study is the first of its kind. No one has tried to establish two-way communication in the wild.

“This is an authentic way to approach this, she’s not imposing herself on them,” said Lori Marino, the Emory University biologist who, with Hunter College psychologist Diana Reiss, pioneered dolphin self-recognition studies. “She’s cultivated a relationship with these dolphins over a very long time and it’s entirely on their terms. I think this is the future of working with dolphins.”

For each session, the researchers played with the dolphins for about half-an-hour, for a total of roughly 40 hours over the course of three years. They reported their findings of this pilot study in the December issue of Acta Astronautica.

Herzing’s team found that six dolphins, all young females, were interested in the game, and would come to play when the game was on. Young males were typically less social and less interested in humans. “This is when the females have a lot of play time,” Herzing said, “before they are busy being mothers.”

To Herzing’s surprise, some of her spotted dolphins recruited bottlenose dolphins, another species, to the game. This shows their natural curiosity, Herzing said. In the wild, dolphins communicate across cetacean species lines, coordinating hunting with other dolphins and even sharing babysitting duties.

Herzing found the study sessions were most successful when, before playing, the humans and dolphins swam together slowly and in synchrony, mimicked each other and made eye contact. These are signs of good etiquette among dolphins. Humans also signal their interest in someone with eye contact and similar body language. Perhaps these are universal — and extraterrestrial — signs of good manners.

Before we hope to understand extraterrestrials, then, perhaps we should practice with smart animals right here on Earth. Astronomer Laurance Doyle of the SETI Institute was struck by this thought at a recent conference.

“From the way the presenter was speaking, I thought he was going to announce that he had found a signal of extraterrestrial intelligence,” Doyle said. “We’ve been waiting for this for years, but I thought, ‘We’re not ready!’  We can’t even speak to the intelligent animals on Earth.”

Image: Two Atlantic spotted dolphins in the wild. (Ricardo Liberato)

Easier Language Through Math

Laurance Doyle of the SETI Institute in Mountain View, California, also studies animal communication in preparation for extraterrestrial contact. Doyle uses information theory — a branch of math that analyzes the structure and relationships of information — to analyze radio signals, hoping to better detect intelligence in space.

“Information theory is an example of an intelligence filter we can use to sift the signals we get from space,” Doyle said. “Otherwise, we might miss them.”

Using information theory it’s possible to separate binary code from random 0s and 1s, for example. By analyzing dolphin sounds, it’s possible to know that adults send information when they whistle, but not babies. Like human babies, they just babble until they’ve learned language. Information theory also shows that humpback whales have rules of grammar and syntax.

“At SETI meetings we always ask ‘Are we alone?’” Doyle said. “No, we’re not alone. There are many animals communicating right here that we don’t understand.”

Doyle is interested in applying information theory to bees. Social bees are capable of complex group decisions, it seems, but their intelligence is a product of the hive. He also plans to study the communication between trees, because they share information about pests and threats via chemicals.

“Who knows? Brains might not be necessary,” Doyle said.
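Doyle’s “intelligence filter” can be illustrated with the most basic quantity in information theory, Shannon entropy: a symbol stream with internal structure has measurably different statistics from uniform noise. The toy sketch below shows only that bare principle (the sequences are invented, and real analyses of whistle repertoires rely on much richer measures such as conditional entropies and Zipf slopes):

```python
# Shannon entropy of a symbol sequence, in bits per symbol.  Structured
# sequences (repetition, grammar-like constraints) score differently from
# uniformly random ones over the same alphabet -- the crude idea behind
# using information theory as a first-pass filter for signals.
import math
import random
from collections import Counter

def entropy_bits(seq):
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alphabet = "ABCD"
random.seed(0)

random_seq = [random.choice(alphabet) for _ in range(10_000)]   # no structure
patterned_seq = [alphabet[i % 2] for i in range(10_000)]        # rigid "ABAB..." pattern

print(f"random:    {entropy_bits(random_seq):.2f} bits/symbol")    # ~2.0 (4 symbols, uniform)
print(f"patterned: {entropy_bits(patterned_seq):.2f} bits/symbol") # 1.0 (only A and B, evenly)
```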

Source: http://www.wired.com/wiredscience/2011/02/seti-dolphins/

Dolphins and Language

Natural language? Dolphins produce various types of sounds, including clicks, burst-pulse emissions, and whistles. Clicks are used for echolocation, the dolphin’s form of sonar. Through echolocation, the dolphin can examine its world through sound, by listening to the echoes returning from objects struck by the clicks. Burst-pulse sounds may indicate the dolphin’s emotional state, ranging from pleasure to anger. However, these types of vocalizations have been little studied and much remains to be learned about them. Whistles may be used for communication, but it is still an open question as to whether, or how much, whistle communication is intentional versus unintentional (e.g., rapidly repeated whistling may be elicited by stress, without any specific intention to convey that emotional state to others). During the 1960s, researchers attempted to determine whether the whistle vocalizations might be a form of language. Investigators recorded whistles from many dolphins in many different situations, but failed to demonstrate sufficient complexity in the vocalizations to support anything approaching a human language system. Some of the early work instead pointed to the stereotypy of the whistles from individual dolphins, leading David and Melba Caldwell to suggest that the whistle functioned principally as a “signature,” with each individual dolphin producing a unique signature. Presumably, this enabled that individual to be identified by others. Other researchers have noted, however, that there can be a great deal of flexibility in the whistle. Douglas Richards, James Wolz, and Louis Herman, at the Kewalo Basin Marine Mammal Laboratory at the University of Hawaii, reported a study showing that a dolphin could use its whistle mode to imitate a variety of sounds generated by a computer and broadcast underwater into the dolphin’s habitat. Peter Tyack later reported that one dolphin could imitate another’s whistle, thereby possibly referring to or calling that individual. As was noted earlier, referring symbolically to another individual, or to some other object or event in the environment, is one of the basic characteristics of a language. However, we still do not know to what extent the dolphin’s whistles may be used to refer to things other than themselves or another dolphin. This is a fruitful area for additional study, however.

Although the evidence strongly suggests that dolphins do not possess a natural language, like the case for apes, it is still important and informative to study whether dolphins might nevertheless be able to learn some of the fundamental defining characteristics of human language. Any demonstration of language-learning competency by dolphins would bear on questions of the origins of human language, shifting the emphasis from the study of precursors in other hominoid species to common convergent characteristics in ape and dolphin that might lead to advanced communicative and cognitive capacities.

Early attempts at teaching language to dolphins. From the mid-1950s to the mid-1960s, John Lilly promoted the idea that bottlenosed dolphins (Tursiops truncatus) might possess a natural language. He based this supposition on this species’ exceptionally large brain with its richly developed neocortex. He reasoned that the large brain must be a powerful information processor having capabilities for advanced levels of intellectual accomplishment, including the development of a natural language. He set about to uncover the supposed language. Failing in that quest, he then attempted, also without success, to teach human vocal language (English) to dolphins he maintained in his laboratories. Dolphins have a rich vocal repertoire, but not one suited to the production of English phonemes. The procedures used by Lilly and the data he obtained were presented only sketchily, making any detailed analysis of his efforts at teaching language moot.

In the mid-1960s, Duane Batteau developed an automated system that translated spoken Hawaiian-like phonemes into dolphin-like whistle sounds that he projected underwater into a lagoon housing two bottlenosed dolphins. He then attempted to use these sounds as a language for conveying instructions to the dolphins. A major flaw in his approach, however, was that individual sounds were not associated with individual semantic elements, such as objects or actions, but instead functioned as holophrases (complexes of elements). For example, a particular whistle sound instructed the dolphin to “hit the ball with your pectoral fin.” Another sound instructed the dolphins to “swim through a hoop.” Unlike a natural language, there was no unique sound to refer to hit or ball, or hoop, or pectoral fin, or any other unique semantic element. Hence, there was no way to recombine sounds (semantic elements) to create different instructions, such as “hit the hoop (rather than the ball) with your pectoral fin.” After several years of effort, the dolphins were able to learn to follow reliably the holophrastic instructions conveyed by each of 12 or 13 different sounds. However, because of the noted flaw in the approach to construction of a language, the experiment failed as a valid test of dolphin linguistic capabilities.

Kewalo Basin dolphin language studies. The work on dolphin language competencies by Louis Herman and colleagues at the Kewalo Basin Marine Mammal Laboratory in Honolulu was begun in the mid-1970s and emphasized language comprehension from the start. These researchers, working principally with a bottlenosed dolphin named Akeakamai housed at the laboratory, constructed a sign language in which words were represented by the gestures of a person’s arms and hands. The words referred to objects in the dolphin’s habitat, to actions that could be taken to those objects, and to relationships that could be constructed between objects. There were also location words, left and right, expressed relative to the dolphin’s location, that were used to refer to a particular one of two objects having the same name, e.g., left hoop vs. right hoop. Syntactic rules, based on word order, governed how sequences of words could be arranged into sentences to extend meaning. The vocabulary of some 30 to 40 words, together with the word-order rules, allowed for many thousands of unique sentences to be constructed. The simplest sentences were instructions to the dolphin to take named actions to named objects. For example, a sequence of two gestures glossed as surfboard over directs the dolphin to leap over the surfboard, and a sequence of three gestures glossed as left Frisbee tail-touch directs the dolphin to touch the Frisbee on her left with her tail. More complex sentences required the dolphin to construct a relationship between two objects, such as taking one named object to another named object or placing one named object in or on another named object. To interpret relational sentences correctly, the dolphin had to take account of both word meaning and word order. For example, a sequence of three gestures glossed as person surfboard fetch tells the dolphin to bring the surfboard to the person (who is in the water), but surfboard person fetch, the same three gestures rearranged, requires that the person be carried to the surfboard. By incorporating left and right into these relational sentences, highly complex instructions could be generated. For example, the sequence of five gestures glossed as left basket right ball in asks the dolphin to place the ball on her right into the basket on her left. In contrast, the rearranged sequence right basket left ball in means the opposite, “put the ball on the left into the basket on the right.” The results published by Louis Herman, Douglas Richards, and James Wolz showed that the dolphin was proficient at interpreting these various types of sentences correctly, as evidenced by her ability to carry out the required instructions, including instructions new to her experience. These were the first published results showing convincingly an animal’s ability to process both semantic and syntactic information in interpreting language-like instructions. Semantics and syntax are considered core attributes of any human language.
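The word-order rules described above are compact enough to capture in a toy interpreter. The sketch below is not the laboratory’s software, just one reading of the inverse grammar as described: simple sentences are OBJECT ACTION, relational sentences name the destination first and the transported object second, and left/right modify the object name that follows them.

```python
# A toy interpreter (an illustration, not the Kewalo Basin system) for the
# word order described above:
#   simple sentence:      [left|right] OBJECT  ACTION
#   relational sentence:  [left|right] DESTINATION  [left|right] OBJECT  FETCH|IN
# In relational sentences the first object named is the destination, which is
# why "person surfboard fetch" and "surfboard person fetch" mean different things.
OBJECTS = {"person", "surfboard", "hoop", "ball", "basket", "frisbee", "speaker"}
SIMPLE_ACTIONS = {"over", "tail-touch", "touch"}
RELATIONAL_ACTIONS = {"fetch", "in"}
MODIFIERS = {"left", "right"}

def parse(gestures):
    """Return an English gloss of a gesture sequence, or raise ValueError."""
    words = gestures.lower().split()
    action = words[-1]

    # read object phrases ([left|right] object-name) up to the final action sign
    phrases, i = [], 0
    while i < len(words) - 1:
        side = ""
        if words[i] in MODIFIERS:
            side, i = words[i] + " ", i + 1
        if i >= len(words) - 1 or words[i] not in OBJECTS:
            raise ValueError(f"expected an object name, got {words[i:]!r}")
        phrases.append(side + words[i])
        i += 1

    if action in SIMPLE_ACTIONS and len(phrases) == 1:
        return f"perform '{action}' on the {phrases[0]}"
    if action in RELATIONAL_ACTIONS and len(phrases) == 2:
        dest, obj = phrases                       # destination comes first
        verb = "take" if action == "fetch" else "place"
        prep = "to" if action == "fetch" else "in/on"
        return f"{verb} the {obj} {prep} the {dest}"
    raise ValueError(f"anomalous sequence: {gestures!r}")

print(parse("person surfboard fetch"))     # take the surfboard to the person
print(parse("surfboard person fetch"))     # take the person to the surfboard
print(parse("left basket right ball in"))  # place the right ball in/on the left basket
```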

Ronald Schusterman and Kathy Krieger tested whether a California sea lion named Rocky might be able to learn to understand sentence forms similar to those understood by the dolphin Akeakamai. Rocky was able to carry out gestural instructions effectively for simpler types of sentences requiring an action to an object. The object was specified by its class membership (e.g., “ball”) and in some cases also by its color (black or white) or size (large or small). In a later study, Schusterman and Robert Gisiner reported that Rocky was able to understand relational sentences requiring that one object be taken to another object. These reports suggested that the sea lion was capable of semantic processing of symbols and, to some degree, of syntactic processing. A shortcoming of the sea lion work, however, was the absence of contrasting terms for relational sentences, such as the distinction between “fetch” (take to) and “in” (place inside of or on top of) demonstrated for the dolphin Akeakamai. Additionally, unlike the dolphin, the sea lion’s strings of gestures were given discretely, each gesture followed by a pause during which the sea lion looked about to locate specified objects before being given the next gesture in the string. In contrast, gestural strings given to the dolphin Akeakamai were without pause, analogous to the spoken sentence in human language. Further, Rocky did not show significant generalization across objects of the same class (e.g., different balls), but unlike the dolphin seemed to regard a gesture as referring to a particular exemplar of the class rather than to the entire class. Thus, although many of the responses of the sea lion resembled those of the dolphin, the processing strategies of the two seemed different, and the concepts developed by the sea lion appeared to be more limited than those developed by the dolphin.

Akeakamai’s knowledge of the grammar of the language. As a test of Akeakamai’s grammatical knowledge of the language she had been taught, Louis Herman, Stan Kuczaj, and Mark Holder constructed anomalous gestural sentences. These were sentences that violated the syntactic rules of the language or the semantic relations among words. The researchers then studied the dolphin’s spontaneous responses to these sentences. For example, the researchers compared the dolphin’s responses to three similar gestural sequences: person hoop fetch, person speaker fetch, and person speaker hoop fetch. The first sequence is a proper instruction; it violates no semantic or syntactic rule of the learned language. It directs the dolphin to bring the hoop to the person, which the dolphin does easily. The second sequence is a syntactically correct sequence but is a semantic anomaly inasmuch as it directs the dolphin to take the underwater speaker, firmly attached to the tank wall, to the person. The dolphin typically rejects sequences like this, by not initiating any action. The final sequence is a syntactic anomaly in that there is no sequential structure in the grammar of the language that provides for three object names within a sequence. However, embedded in the four-item anomaly are two semantically and syntactically correct three-item sequences, person hoop fetch and speaker hoop fetch. The dolphin in fact typically extracts one of these subsets and carries out the instruction implicit in that subset, by taking the hoop to the person or to the underwater speaker.

These different types of responses revealed a rather remarkable and intelligent analysis of the sequences. Thus, the dolphin did not terminate her response when an anomalous initial sequence such as person speaker was first detected. Instead, she continued to process the entire sequence, apparently searching backward and forward for proper grammatical structures as well as proper semantic relationships, until she found something she could act on, or not. This analytic type of sequence processing is part and parcel of sentence processing by human listeners.

Understanding of symbolic references to absent objects. Louis Herman and Paul Forestell tested the dolphin Akeakamai’s understanding of symbolic references to objects that were not present in the dolphin’s habitat at the time the reference was made. For this purpose, they constructed a new syntactic frame consisting of an object name followed by a gestural sign glossed as “Question”. For example, the two-item gestural sequence glossed as basket question asks whether a basket is present in the dolphin’s habitat. The dolphin could respond Yes by pressing a paddle to her right or No by pressing a paddle to her left. Over a series of such questions, with the particular objects present being changed over blocks of trials, the dolphin was as accurate at reporting that a named object was absent as she was at reporting that it was present. These results gave a clear indication that the gestures assigned to objects were understood referentially by the dolphin, i.e., that the gestures acted as symbolic references to those objects.

Interpreting language instructions given through television displays. The television medium can display scenes that are representations of the real world, or sometimes of imagined worlds. As viewers, we understand this and often respond to the displayed content similarly to how we might respond to the real world. We of course understand that it is a representation, and not the real world. It appears, however, that an appreciation of television as a representation of the real world does not come easily to animals, even to apes. Sue Savage-Rumbaugh wrote in her book, Ape Language, that chimpanzees show at most a fleeting interest in television, and that from their behavior it was not possible to infer that they were seeing anything more than changing patterns or forms. Her own language-trained chimpanzee subjects, Sherman and Austin, only learned to attend to and interpret television scenes after months of exposure in the presence of human companions who reacted to the scenes by exclaiming or vocalizing at appropriate times. Louis Herman, Palmer Morrel-Samuels and Adam Pack tested whether the dolphin Akeakamai might respond appropriately to language instructions delivered by a trainer whose image was presented on a television screen. Akeakamai had never been exposed to television of any sort previously. Then, for the first time, the researchers simply placed a television monitor behind one of the underwater windows in the dolphin’s habitat and directed Akeakamai to swim down to the window. On arriving there she saw an image of the trainer on the screen. The trainer then proceeded to give Akeakamai instructions through the familiar gestural language. The dolphin watched and then turned and carried out the first instruction correctly and also responded correctly to 11 of 13 additional gestural instructions given her at that same testing session. In further tests, Akeakamai was able to respond accurately even to degraded images of the trainer, consisting, for example, of a pair of white hands moving about in black space. The overall results suggested that Akeakamai spontaneously processed the television displays as representations of the gestural language she had been exposed to live for many years previously.

Implications

The results of the language comprehension work with the bonobo chimpanzee and the dolphin Akeakamai show many similarities, especially in the receptivity of the animals to the language formats used and in their proficiency at responding to sequences of symbols. The dolphin has been tested in more formal procedures than has the bonobo, leading to a fuller understanding of the dolphin’s grammatical competencies than has been attained for the chimp. The findings with the bottlenosed dolphin are in keeping with many other demonstrations of the cognitive abilities of this species. The advanced cognitive abilities of apes are also well documented. An early summary by Herman (1980, p. 421) still seems appropriate to accommodate the convergent cognitive and language-learning abilities of ape and dolphin: “The major link that cognitively connects the otherwise evolutionarily divergent (dolphins)… and primates may be social pressure–the requirement for integration into a social order having an extensive communication matrix for promoting the well-being and survival of individuals…. Effective functioning in such a society demands extensive socialization and learning. The extended maturational stages of the young primate or dolphin and the close attention given it by adults and peers…provide the time and tutoring necessary for meeting these demands. In general, high levels of parental care and high degrees of cortical encephalization go together…. It is not difficult to imagine that the extensive development of the brain in (dolphins)…and the resulting cognitive skills of some members of this group, have derived from the demands of social living, including both cooperation and competition among peers, expressed within the context of the protracted development of the young. These cognitive skills may in turn provide the behavioral flexibility that has allowed the diverse family of (dolphins)…to successfully invade so many different aquatic habitats and niches.”

Bibliography

Herman, L. M. (1986). Cognition and language competencies of bottlenosed dolphins. In Dolphin cognition and behavior: A comparative approach (R. J. Schusterman, J. Thomas, and F. G. Wood, eds.), pp. 221-251. Lawrence Erlbaum Associates, Hillsdale, NJ.

Herman, L. M. (1989). In which Procrustean bed does the sea lion sleep tonight? The Psychological Record, 39, 19-50.

Herman, L. M. and Forestell, P. H. (1985). Reporting presence or absence of named objects by a language-trained dolphin. Neuroscience and Biobehavioral Reviews, 9, 667-691.

Herman, L. M. and Tavolga, W. N. (1980). The communication systems of cetaceans. In Cetacean behavior: Mechanisms and functions (L. M. Herman, ed.), pp. 149-209. Wiley Interscience, New York.

Herman, L. M. and Uyeyama, R. K. (1999). The dolphin’s grammatical competency: Comments on Kako (1998). Animal Learning & Behavior, 27, 18-23.

Herman, L. M., Kuczaj, S. III, and Holder, M. D. (1993). Responses to anomalous gestural sequences by a language-trained dolphin: Evidence for processing of semantic relations and syntactic information. Journal of Experimental Psychology: General, 122, 184-194.

Herman, L. M., Morrel-Samuels, P. and Pack, A. A. (1990). Bottlenosed dolphin and human recognition of veridical and degraded video displays of an artificial gestural language. Journal of Experimental Psychology: General, 119, 215-230.

Herman, L. M., Richards, D. G. and Wolz, J. P. (1984). Comprehension of sentences by bottlenosed dolphins. Cognition, 16, 129-219.

Source: http://www.dolphin-institute.org/resource_guide/animal_language.htm

"So long, and thanks for all the fish" (Image: Flip Nicklin/Minden/FLPA)“So long, and thanks for all the fish” (Image: Flip Nicklin/Minden/FLPA)

Talk with a dolphin via underwater translation machine

Editorial: The implications of interspecies communication

A DIVER carrying a computer that tries to recognise dolphin sounds and generate responses in real time will soon attempt to communicate with wild dolphins off the coast of Florida. If the bid is successful, it will be a big step towards two-way communication between humans and dolphins.

Since the 1960s, captive dolphins have been communicating via pictures and sounds. In the 1990s, Louis Herman of the Kewalo Basin Marine Mammal Laboratory in Honolulu, Hawaii, found that bottlenose dolphins can keep track of over 100 different words. They can also respond appropriately to commands in which the same words appear in a different order, understanding the difference between “bring the surfboard to the man” and “bring the man to the surfboard”, for example.

But communication in most of these early experiments was one-way, says Denise Herzing, founder of the Wild Dolphin Project in Jupiter, Florida. “They create a system and expect the dolphins to learn it, and they do, but the dolphins are not empowered to use the system to request things from the humans,” she says.

Since 1998, Herzing and colleagues have been attempting two-way communication with dolphins, first using rudimentary artificial sounds, then by getting them to associate the sounds with four large icons on an underwater “keyboard”.

By pointing their bodies at the different symbols, the dolphins could make requests – to play with a piece of seaweed or ride the bow wave of the divers’ boat, for example. The system managed to get the dolphins’ attention, Herzing says, but wasn’t “dolphin-friendly” enough to be successful.

Herzing is now collaborating with Thad Starner, an artificial intelligence researcher at the Georgia Institute of Technology in Atlanta, on a project named Cetacean Hearing and Telemetry (CHAT). They want to work with dolphins to “co-create” a language that uses features of sounds that wild dolphins communicate with naturally.

Knowing what to listen for is a huge challenge. Dolphins can produce sound at frequencies up to 200 kilohertz – around 10 times as high as the highest pitch we can hear – and can also shift a signal’s pitch or stretch it out over a long period of time.

The animals can also project sound in different directions without turning their heads, making it difficult to use visual cues alone to identify which dolphin in a pod “said” what and to guess what a sound might mean.

To record, interpret and respond to dolphin sounds, Starner and his students are building a prototype device featuring a smartphone-sized computer and two hydrophones capable of detecting the full range of dolphin sounds.

A diver will carry the computer in a waterproof case worn across the chest, and LEDs embedded around the diver’s mask will light up to show where a sound picked up by the hydrophones originates from. The diver will also have a Twiddler – a handheld device that acts as a combination of mouse and keyboard – for selecting what kind of sound to make in response.
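Two engineering constraints fall straight out of the description above: the hydrophones must be sampled fast enough to capture sounds up to roughly 200 kHz (the Nyquist criterion demands at least twice that rate), and with two hydrophones a bearing to the source can only be estimated from the difference in arrival times. The sketch below works through both numbers under textbook assumptions (a nominal 1500 m/s sound speed in seawater and an assumed hydrophone spacing); it is not a description of the CHAT hardware.

```python
import math

SPEED_OF_SOUND_WATER = 1500.0   # m/s, typical seawater value
F_MAX = 200_000.0               # Hz, upper end of dolphin sound production

# Nyquist: to digitise content up to F_MAX the sample rate must be >= 2 * F_MAX.
min_sample_rate = 2 * F_MAX
print(f"minimum sample rate: {min_sample_rate / 1000:.0f} kHz")   # 400 kHz

def bearing_from_tdoa(delta_t, spacing):
    """Bearing (degrees from broadside) of a far-field source, estimated from the
    time difference of arrival between two hydrophones 'spacing' metres apart."""
    x = SPEED_OF_SOUND_WATER * delta_t / spacing
    x = max(-1.0, min(1.0, x))          # clamp against measurement noise
    return math.degrees(math.asin(x))

# Example: hydrophones 0.2 m apart (an assumed spacing), arrival difference 50 microseconds.
print(f"bearing: {bearing_from_tdoa(50e-6, 0.2):.1f} degrees")
```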

Herzing and Starner will start testing the system on wild Atlantic spotted dolphins (Stenella frontalis) in the middle of this year. At first, divers will play back one of eight “words” coined by the team to mean “seaweed” or “bow wave ride”, for example. The software will listen to see if the dolphins mimic them. Once the system can recognise these mimicked words, the idea is to use it to crack a much harder problem: listening to natural dolphin sounds and pulling out salient features that may be the “fundamental units” of dolphin communication.

The researchers don’t know what these units might be. But the algorithms they are using are designed to sift through any unfamiliar data set and pick out interesting features (see “Pattern detector”). The software does this by assuming an average state for the data and labelling features that deviate from it. It then groups similar types of deviations – distinct sets of clicks or whistles, say – and continues to do so until it has extracted all potentially interesting patterns.
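As described, the discovery step amounts to unsupervised anomaly detection followed by clustering: model a background state, flag stretches that deviate from it, then group similar deviations. The sketch below shows that pipeline in miniature on a one-dimensional feature stream; it stands in for whatever acoustic features the real software extracts, and the thresholds and grouping rule are illustrative, not the published algorithm.

```python
# Minimal deviate-then-group sketch: model the background, flag samples that
# deviate strongly from it, cut the flags into contiguous segments, then group
# the segments by a simple feature.
import numpy as np

rng = np.random.default_rng(0)

# Fake feature stream: background noise with two kinds of "events" mixed in.
stream = rng.normal(0.0, 0.3, 2000)
for start, level in [(200, 3.0), (700, -3.0), (1200, 3.2), (1600, -2.8)]:
    stream[start:start + 40] += level

# 1) background ("average state"): a robust centre and spread of the stream
centre = np.median(stream)
spread = 1.4826 * np.median(np.abs(stream - centre))   # ~std for Gaussian noise

# 2) label deviations and cut them into contiguous segments
flags = np.abs(stream - centre) > 5 * spread
edges = np.flatnonzero(np.diff(flags.astype(int)))
segments = list(zip(edges[::2] + 1, edges[1::2] + 1))  # (start, end) index pairs

# 3) group similar deviations -- here simply by the sign of their mean level
groups = {}
for a, b in segments:
    kind = "upward" if stream[a:b].mean() > 0 else "downward"
    groups.setdefault(kind, []).append((int(a), int(b)))
print(groups)   # two groups of two segments each
```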

Once these units are identified, Herzing hopes to combine them to make dolphin-like signals that the animals find more interesting than human-coined “words”. By associating behaviours and objects with these sounds, she may be the first to decode the rudiments of dolphins’ natural language.

Justin Gregg of the Dolphin Communication Project, a non-profit organisation in Old Mystic, Connecticut, thinks that getting wild dolphins to adopt and use artificial “words” could work, but is sceptical that the team will find “fundamental units” of natural dolphin communication.

Even if they do, deciphering their meanings and using them in the correct context poses a daunting challenge. “Imagine if an alien species landed on Earth wearing elaborate spacesuits and walked through Manhattan speaking random lines from The Godfather to passers-by,” he says.

“We don’t even know if dolphins have words,” Herzing admits. But she adds, “We could use their signals, if we knew them. We just don’t.”

Pattern detector

The software that Thad Starner is using to make sense of dolphin sounds was originally designed by him and a former student, David Minnen, to “discover” interesting features in any data set. After analysing a sign-language video, the software labelled 23 of 40 signs used. It also identified when the person started and stopped signing, or scratched their head.

The software has also identified gym routines – dumb-bell curls, for example – by analysing readings from accelerometers worn by the person exercising, even though the software had not previously encountered such data. However, Starner cautions that if meaning must be ascribed to the patterns picked out by the software, then this will require human input.

http://www.newscientist.com/article/mg21028115.400-talk-with-a-dolphin-via-underwater-translation-machine.html

Hearing and Echolocation in Dolphins

  • S.H. Ridgway
  • Space and Naval Warfare Systems Center, San Diego, CA, USA
  • W.W.L. Au
  • University of Hawaii, Kailua, Hawaii, USA

The small whales called dolphins, such as the bottlenose dolphin Tursiops truncatus, have sensitive, broadband hearing extending to at least 150 kHz. Using ultra-brief, broadband, intense pulses that may reach 230 dB re 1 μPa at 1 m, this dolphin may have a maximum echolocation range of 100–600 m in the ocean. A narrow elongated cochlea coupled to a hypertrophied central auditory nervous system allows for rapid processing of echoes. All dolphins have relatively large brains. Some, with bodies not much larger than a human’s, may have brains of 1500 g. Much of the large dolphin brain may be related to the need for rapid processing of echoes in water, where sound travels almost five times as fast as in air.
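The ranges and sound speed quoted above translate directly into the timing problem the dolphin's auditory system has to solve: an echo from a target 600 m away returns in under a second, and successive clicks must be scheduled around returning echoes. A rough worked example, using a nominal 1500 m/s for seawater rather than a figure from the article:

```python
SPEED_OF_SOUND_WATER = 1500.0   # m/s, nominal seawater value (vs ~340 m/s in air)

def echo_delay(range_m):
    """Two-way travel time of an echolocation click to a target and back."""
    return 2.0 * range_m / SPEED_OF_SOUND_WATER

for r in (100, 600):
    print(f"target at {r:>3} m -> echo returns after {echo_delay(r) * 1000:.0f} ms")
# target at 100 m -> echo returns after 133 ms
# target at 600 m -> echo returns after 800 ms
```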

Figure 6. Sound reception characteristics of bottlenose dolphins. Receiving (a) vertical beam patterns and (b) horizontal beam patterns for frequencies of 30, 60, and 120 kHz.

http://www.sciencedirect.com/science/article/pii/B9780080450469002631

The object behind the echo: dolphins (Tursiops truncatus) perceive object shape globally through echolocation

  • Adam A. Pack (a, b, c)
  • Louis M. Herman (a, b, c)
  • Matthias Hoffmann-Kuhnt (a, d)
  • Brian K. Branstetter (a, b)
  • (a) Kewalo Basin Marine Mammal Laboratory, 1129 Ala Moana Boulevard, Honolulu, HI 96814, USA
  • (b) Psychology Department, University of Hawaii, Honolulu, HI, USA
  • (c) The Dolphin Institute, 420 Ward Avenue, Suite 212, Honolulu, HI 96814, USA
  • (d) Institut für Verhaltensbiologie, Freie Universität Berlin, Haderslebener Str. 9, 12163 Berlin, Germany
  • Received 6 June 2001. Revised 8 October 2001. Accepted 15 October 2001. Available online 27 November 2001.

Two experiments tested a bottlenosed dolphin’s ability to match objects across echolocation and vision. Matching was tested from echolocation sample to visual alternatives (E–V) and from visual sample to echolocation alternatives (V–E). In Experiment 1, the dolphin chose a match from among three alternative objects that differed in overall (global) shape, but shared several ‘local’ features with the sample. The dolphin conducted a right-to-left serial nonexhaustive search among the alternatives, stopping when a match was encountered. It matched correctly on 93% of V–E trials and on 99% of E–V trials with completely novel combinations of objects despite the presence of many overlapping features. In Experiment 2, a fourth alternative was added in the form of a paddle that the dolphin could press if it decided that none of the three alternatives matched the sample. When a match was present, the dolphin selected it on 94% of V–E trials and 95% of E–V trials. When a match was absent, the dolphin pressed the paddle on 74% and 76%, respectively, of V–E and E–V trials. The approximate 25% error rate, which consisted of a choice of one of the three non-matching alternatives in lieu of the paddle press, increased from right to center to left alternative object, reflecting successively later times in the dolphin’s search path. A weakening in memory for the sample seemed the most likely cause of this error pattern. Overall, the results gave strong support to the hypothesis that the echolocating dolphin represents an object by its global appearance rather than by local features.

Video: seeing through the box

http://www.sciencedirect.com/science/article/pii/S0376635701002005

Whistle discrimination and categorization by the Atlantic bottlenose dolphin (Tursiops truncatus): A review of the signature whistle framework and a perceptual test

  • Heidi E. Harley (a, b)
  • (a) Division of Social Sciences, New College of Florida, 5800 Bay Shore Road, Sarasota, FL 34243, United States
  • (b) The Seas, Epcot®, Walt Disney World® Resort, Lake Buena Vista, FL, United States

Dolphin whistles vary by frequency contour, changes in frequency over time. Individual dolphins may broadcast their identities via uniquely contoured whistles, “signature whistles.” A recent debate concerning categorization of these whistles has highlighted the on-going need for perceptual studies of whistles by dolphins. This article reviews research on dolphin whistles as well as presenting a study in which a captive, female, adult bottlenose dolphin performed a conditional matching task in which whistles produced by six wild dolphins in Sarasota Bay were each paired with surrogate producers, specific objects/places. The dolphin subject also categorized unfamiliar exemplars produced by the whistlers represented by the original stimuli. The dolphin successfully discriminated among the group of whistles, associated them with surrogate producers, grouped new exemplars of the same dolphin’s whistle together when the contour was intact, and discriminated among same-contour whistles produced by the same dolphin. Whistle sequences that included partial contours were not categorized with the original whistlers. Categorization appeared to be based on contour rather than specific acoustic parameters or voice cues. These findings are consistent with the perceptual tenets associated with the signature whistle framework which suggests that dolphins use individualized whistle contours for identification of known conspecifics.
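The abstract’s conclusion, that categorization tracked the whistle’s frequency contour rather than voice cues, is the same intuition behind most computational comparisons of signature whistles, where contours are compared after factoring out absolute frequency and duration. The sketch below shows one common way to do that, a tiny dynamic-time-warping comparison of normalized contours; it is offered as an illustration of the idea, not as the method used in this study.

```python
import numpy as np

def normalize(contour):
    """Remove absolute frequency and scale, keeping only the contour's shape."""
    c = np.asarray(contour, dtype=float)
    return (c - c.mean()) / (c.std() + 1e-9)

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between contours."""
    a, b = normalize(a), normalize(b)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# The same shape (an upsweep) at different absolute frequencies and durations,
# versus a different shape (a downsweep).
upsweep_a = np.linspace(6000, 12000, 50)
upsweep_b = np.linspace(9000, 18000, 80)
downsweep = np.linspace(15000, 7000, 60)

print(dtw_distance(upsweep_a, upsweep_b))   # small: same contour shape
print(dtw_distance(upsweep_a, downsweep))   # large: different contour shape
```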

http://www.sciencedirect.com/science/article/pii/S0376635707003142

Whistle characteristics in free-ranging bottlenose dolphins (Tursiops truncatus) in the Mediterranean Sea: Influence of behaviour

Bruno Díaz López

Bottlenose dolphins (Tursiops truncatus) are an extremely vocal mammalian species and vocal communication plays an important role in mediating social interactions. Very little is known about how wild bottlenose dolphins use whistles in different contexts and no data exist for context specificity of whistle characteristics. This study describes, for the first time in the Mediterranean Sea, the whistle characteristics of bottlenose dolphins in their natural repertoire. Over 35 h of behavioural observations and simultaneous recordings, 3032 tonal, frequency modulated whistles were detected. Our findings further support, for the first time in wild bottlenose dolphins, the suggestion that acoustic features may be good predictors of behavioural state and vice versa. These results advocate that these parameters may be used to communicate specific information on the behavioural context of the individuals involved. Additionally, visual inspection reveals that upsweeps and multi-looped whistles play an important role in the natural communication system of bottlenose dolphins. Likewise, this study demonstrates how dynamic bottlenose dolphin whistle characteristics are and how important it is to consider many factors in analysis. High intra-specific variability in whistle characteristics demonstrates its integral role in the complex social lives of wild bottlenose dolphins.

http://www.sciencedirect.com/science/article/pii/S1616504710000960

Dolphin Signature Whistles

  • Woods Hole Oceanographic Institution, Woods Hole, MA, USA
  • University of St. Andrews, St. Andrews, Fife, Scotland, UK

The term ‘signature’ has often been applied to animal vocalizations when an individually distinctive pattern was found in them. The vast majority of animals achieve this by means of voice cues, which result from individual variability in the shape and size of the vocal tract. Dolphin signature whistles are qualitatively different from most individually distinctive signals seen in other mammalian species. Identity is encoded in a frequency modulation pattern that is learned or invented early in life. These whistles are used in individual recognition and in maintaining group cohesion and seem to function similarly to human names.

Figure 2. Spectrograms and associated sound files of three signature whistles from each of 10 bottlenose dolphins, recorded during brief capture–release events in Sarasota, Florida. Age and sex for each individual are noted (data courtesy of the Chicago Zoological Society’s Sarasota Dolphin Research Program). Several different signature whistle contour types are illustrated, including multiloop whistles with varying numbers of connected or disconnected loops (dolphins A, B, C, D, E, G, H, J), including examples of distinct introductory (dolphin J) and terminal loops (dolphins D, F, G, H), and whistles with no loop structure (dolphins E, I). Note the stability of the contour even with variation in frequency parameters (e.g., dolphins A, D, G) and duration (e.g., dolphins I and J). Spectrograms were made in Avisoft SASLAB Pro, using a 256 pt FFT, 50% overlap and FlatTop window. The color scheme ranged from light blue–dark blue–purple–red–yellow–light green–green, with green being the loudest portions of the signal. Recordings were made with different types of recording equipment; prior to 1989, most recordings were made on Sony or Marantz stereo cassette recorders, with upper frequency limits of 15–20 kHz; thus, harmonics are less noticeable in these recordings. Later recordings were made on hifi video cassette recorders, with frequency responses extending to above 30 kHz. High-pass filters, ranging from 500 Hz to 2.5 kHz, were used on some sound files to reduce extraneous noise.
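The caption is specific enough about the analysis settings (256-point FFT, 50% overlap, FlatTop window) that the same kind of display can be reproduced with standard tools. A minimal sketch using SciPy on a synthetic test sweep (Avisoft SASLab Pro itself is a commercial package; the sample rate and signal here are assumptions):

```python
# Reproduce the caption's spectrogram settings (256-pt FFT, 50% overlap,
# FlatTop window) with SciPy on a synthetic frequency-modulated test tone.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 96_000                                   # Hz, assumed sample rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "whistle": a 5-15 kHz sweep standing in for a real recording.
x = signal.chirp(t, f0=5_000, f1=15_000, t1=1.0, method="linear")

f, tt, Sxx = signal.spectrogram(
    x, fs=fs, window="flattop", nperseg=256, noverlap=128  # 50% overlap
)

plt.pcolormesh(tt, f / 1000, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (kHz)")
plt.title("Spectrogram: 256-pt FFT, 50% overlap, FlatTop window")
plt.show()
```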

http://www.sciencedirect.com/science/article/pii/B9780080453378000164

There are sounds here, but they cannot be downloaded.

Contribution of various frequency bands to ABR in dolphins

  • Vladimir V. Popov,
  • Alexander Ya. Supin

Auditory brainstem responses (ABR) to clicks and to noise bursts of various frequency bands and intensities were recorded in two bottlenosed dolphins, Tursiops truncatus. The purpose was to assess the contributions of various parts of the cochlear partition to the ABR and the travelling wave velocity in the cochlea. For band-pass filtered stimuli (1–0.25 oct wide), ABR amplitude increased with increasing stimulus frequency, indicating a higher contribution from basal cochlear parts. For high-pass and low-pass filtered stimuli, ABR amplitude increased as the passband widened. However, the sum of all narrow-band contributions was a waveform of higher amplitude than the real ABR evoked by the wide-band stimulus. Applying a correction based on the assumption that the ‘internal spectrum’ is about 0.4 oct wider than the nominal stimulus spectrum made the sum of narrow-band contributions equal to the wide-band ABR. The travelling wave velocity was computed from ABR latencies, assigning a frequency of 128 kHz to the basal end of the cochlea. The computation gave values from 38.2 oct/ms at the proximal end of the basilar membrane to 4.0 oct/ms at a distance of 3.25 oct (13.5 kHz).
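
To make the units concrete, the sketch below works through the arithmetic the abstract describes: cochlear place is expressed in octaves below the 128 kHz basal end, and travelling-wave velocity in oct/ms is the change in place divided by the change in ABR latency. The latencies here are made-up values chosen only to land near the reported 38 to 4 oct/ms range; they are not data from the paper.

```python
# Sketch of the velocity arithmetic described above. Latencies are invented
# for illustration; only the 128 kHz basal-end assignment comes from the abstract.
import numpy as np

f_base = 128.0                                                # kHz, basal end of the cochlea
freqs_khz = np.array([128.0, 64.0, 32.0, 16.0, 13.5])         # stimulus band centres (assumed)
latencies_ms = np.array([2.500, 2.526, 2.590, 2.760, 2.820])  # hypothetical ABR latencies

octaves_from_base = np.log2(f_base / freqs_khz)                # cochlear distance (oct)
velocity = np.diff(octaves_from_base) / np.diff(latencies_ms)  # oct/ms per interval

for f, v in zip(freqs_khz[1:], velocity):
    print(f"down to {f:g} kHz: {v:.1f} oct/ms")
```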

http://www.sciencedirect.com/science/article/pii/S0378595500002343

SETI meets a social intelligence: Dolphins as a model for real-time interaction and communication with a sentient species

  • Denise L. Herzing (a, b)
  • a Wild Dolphin Project, P.O. Box 8436, Jupiter, FL 33468, USA
  • b Department of Biological, Psychological Sciences, Florida Atlantic University, Boca Raton, FL, USA

In the past, SETI has focused on the reception and deciphering of radio signals from potential remote civilizations. It is conceivable that real-time contact and interaction with a social intelligence may occur in the future. A serious look at the development of relationships with, and the deciphering of communication signals within and between, a non-terrestrial, non-primate sentient species is therefore relevant. Since 1985 a resident community of free-ranging Atlantic spotted dolphins has been observed regularly in the Bahamas. Life history, relationships, regular interspecific interactions with bottlenose dolphins, and multi-modal underwater communication signals have been documented. Dolphins display social communication signals modified for water, their body types, and their sensory systems. Like anthropologists, human researchers engage in benign observation in the water and interact with these dolphins to develop rapport and trust. Many individual dolphins have been known for over 20 years. Learning the culturally appropriate etiquette has been important in the relationship with this alien society. To engage humans in interaction, the dolphins often initiate spontaneous displays, mimicry, imitation, and synchrony. These elements may be emergent or universal features of one intelligent species contacting another with the intention of initiating interaction. This should be a consideration for real-time contact and interaction in future SETI work.

http://www.sciencedirect.com/science/article/pii/S0094576510000287

Low Frequency Dolphin Sounds

Mar 25, 2012, by Blair Irvine

Barks, yelps, thunks, grunts, chirps, and squawks are little-studied and infrequent sounds emitted by different dolphin species.

They are called low frequency narrow band (LFN) sounds, and they seem to be associated with socializing, sexual, or aggressive behavior, or possibly foraging activities.

LFN sounds have conservation implications because acoustic communication is particularly important in inshore areas where vision is often limited.

Dolphins are well known to emit whistles, echolocation, and burst-pulses.

  • Whistles are tonal signals, audible to humans, which have a social function.
  • Echolocation consists of short, high intensity pulses produced in rapid succession in “click trains,” and it is used for navigation and to capture prey.
  • A burst-pulse is acoustically similar to echolocation pulses, but with higher pulse rates (a toy classification sketch based on these three descriptions follows below).
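
As flagged in the last bullet, here is a toy rule-based sketch that turns the three descriptions above into code. Every threshold and feature name is an assumption for illustration only, not a value taken from SDRP or the acoustics literature.

```python
# Toy heuristic based on the three sound classes described above.
# Thresholds are assumptions, not published values.
def classify_sound(is_tonal, duration_s, pulse_rate_hz=None):
    if is_tonal and duration_s > 0.1:
        return "whistle"                 # tonal, relatively long, frequency modulated
    if pulse_rate_hz is not None:
        if pulse_rate_hz > 300:          # assumed cut-off for "higher pulse rates"
            return "burst-pulse"
        return "echolocation click train"
    return "unclassified"

print(classify_sound(is_tonal=True, duration_s=0.8))                      # whistle
print(classify_sound(is_tonal=False, duration_s=0.3, pulse_rate_hz=50))   # click train
print(classify_sound(is_tonal=False, duration_s=0.3, pulse_rate_hz=600))  # burst-pulse
```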

How LFN sounds fit into the dolphin sound repertoire is unclear because they are heard infrequently and are not often reported in the scientific literature.

A recently published article, however, focuses on LFN sounds, comparing them across dolphin populations in Sarasota Bay, Tampa Bay, and Mississippi Sound in the northern Gulf of Mexico.

This is important research because so little is known about the context of LFN communication. The frequencies of these sounds are below what is normally thought of as the range of good hearing in bottlenose dolphins.

Noise from boat motors could potentially interfere with, or mask, LFN sounds, thus limiting dolphin communication in areas with high motorboat use. SDRP studies have shown that dolphins whistle more frequently when boats approach, but the context is unclear.

The research article was published in the Journal of the Acoustical Society of America. SDRP Director Randall Wells is one of the co-authors, and former SDRP intern and graduate student Ester Quintana-Rizzo is also a co-author.

Simard, P., Lace, N., Gowans, S., Quintana-Rizzo, E., Kuczaj II, S. A., Wells, R. S., & Mann, D. A. (2011). Low frequency narrow-band calls in bottlenose dolphins (Tursiops truncatus): Signal properties, function, and conservation implications. J. Acoust. Soc. Am. 130, 3068. DOI: 10.1121/1.3641442

Abstract

Dolphins routinely use sound for social purposes, foraging and navigating. These sounds are most commonly classified as whistles (tonal, frequency modulated, typical frequencies 5–10 kHz) or clicks (impulsed and mostly ultrasonic). However, some low frequency sounds have been documented in several species of dolphins. Low frequency sounds produced by bottlenose dolphins (Tursiops truncatus) were recorded in three locations along the Gulf of Mexico. Sounds were characterized as being tonal with low peak frequencies (mean = 990 Hz), short duration (mean = 0.069 s), highly harmonic, and being produced in trains. Sound duration, peak frequency and number of sounds in trains were not significantly different between Mississippi and the two West Florida sites; however, the time interval between sounds within trains in West Florida was significantly shorter than in Mississippi (t = 3.001, p = 0.011). The sounds were significantly correlated with groups engaging in social activity (F = 8.323, p = 0.005). The peak frequencies of these sounds were below what is normally thought of as the range of good hearing in bottlenose dolphins, and are likely subject to masking by boat noise.
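
As a rough companion to those summary values, the sketch below screens short sound segments for LFN candidates, using the reported peak frequency (near 1 kHz) and duration (near 0.07 s) as loose thresholds. It is an illustration only, not the detection or statistical method of Simard et al. (2011); the cut-offs and test signals are assumptions.

```python
# Sketch: screening segments for low frequency narrow-band (LFN) candidates.
# Thresholds are loose assumptions inspired by the abstract's summary values.
import numpy as np

def peak_frequency(segment, fs):
    """Frequency (Hz) of the largest FFT magnitude in a windowed segment."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    return freqs[np.argmax(spectrum)]

def is_lfn_candidate(segment, fs, max_peak_hz=2000.0, max_duration_s=0.15):
    return (len(segment) / fs <= max_duration_s
            and peak_frequency(segment, fs) <= max_peak_hz)

fs = 48_000
lfn_like = np.sin(2 * np.pi * 990 * np.arange(0, 0.07, 1 / fs))      # short, ~1 kHz tone
whistle_like = np.sin(2 * np.pi * 8000 * np.arange(0, 0.5, 1 / fs))  # long, high tone

print(is_lfn_candidate(lfn_like, fs))       # True
print(is_lfn_candidate(whistle_like, fs))   # False (too long; peak too high)
```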

http://sarasotadolphin.org/2012/03/25/low-frequency-dolphin-sounds/

Automated Categorisation of Bottlenose Dolphin (Tursiops truncatus) Whistles

Charlotte A Dunn

http://bahamaswhales.org/resources/C_Dunn_MRes_Thesis.pdf

Podcasts from The Dolphin Communication Project (MP3): http://bassflava.com/mp3/Where+does+the+word+%26%23039%3Bdolphin%26%23039%3B+come+from%3F-song-817617.html

Sounds: http://sarasotadolphin.org/dolphin-life/communication-acoustics/dolphin-sounds/

Look for video and audio

Sarasota Dolphin Signature Whistle Catalog, property of the Woods Hole Oceanographic Institution and the Chicago Zoological Society Dolphin Research and Conservation Institute

Investigate further: http://www.pnas.org/content/103/21/8293.full

Continue searching from page 2: http://www.sciencedirect.com/science?_ob=ArticleListURL&_method=list&_ArticleListID=1940088766&_sort=r&_st=13&view=c&_acct=C000228598&_version=1&_urlVersion=0&_userid=10&md5=815a77e4b64cafed21162156fa2338c4&searchtype=a

http://www.rtve.es/noticias/20110816/cientificos-espanoles-traducen-las-belugas/455147.shtml


