Learning to Speak Whale
Teaching technology to learn a language for us: How AI is used to detect and listen to whale calls.
BY KATIE GONZALEZ
Learning another language is a difficult task that requires practice and dedication, and trying to learn a language from another species can seem impossible. Today, with the help of machine learning, humans are teaching computers to speak whale.
Why do scientists want to speak whale?
Whales were hunted commercially on a massive scale during the 20th century for their meat and blubber. Concern for whale populations came to a head in the 1970s, when the humpback whale was added to the endangered species list. Even so, whale calls weren’t studied extensively until the early 1990s, when recordings from the U.S. Navy’s fixed hydrophone arrays, built during the Cold War to listen for enemy submarines, were opened to researchers. Today, six of the thirteen great whale species remain on the endangered species list, and scientists want to learn more about how environmental and anthropogenic factors are affecting them. Whale calls can help: they let researchers quantify populations, track behavioral changes over time, and explain patterns of distribution and movement.
What can we learn?
Despite decades of research, many uncertainties remain about the behavior and geographic distribution of whales. Scientists use a variety of methods to detect and track them, from visual observations, catch records, implanted tags, and DNA testing to passive acoustic arrays. Each method can be tailored to a different question: Do whales of one species from two regions of the Eastern Pacific differ genetically? What is the seasonal migration pattern of a given species? How is a whale population changing over time?
Bottom-mounted hydrophones are especially important for learning about life in remote marine locations that scientists have had almost no information about until now. A hydrophone is essentially an underwater microphone that records sound across a range of frequencies. Hydrophone recordings can reveal things about whales that visual observations alone cannot, such as when and where a species is singing, whether the animals have shifted their distribution, and how their songs are evolving over the years. The scientific community is still puzzled as to why whale calls change over time, but research focused on those calls can offer insight into whales’ relatively unknown lives.
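To make this concrete, here is a minimal sketch of how one hydrophone clip might be turned into a spectrogram, the time-frequency picture that researchers read calls from. It assumes Python with NumPy and SciPy; the filename and the 10-40 Hz band (roughly where blue whale calls sit) are illustrative choices, not details from a specific study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a hydrophone recording; "hydrophone_clip.wav" is a placeholder filename.
sample_rate, audio = wavfile.read("hydrophone_clip.wav")
if audio.ndim > 1:              # keep a single channel if the file has several
    audio = audio[:, 0]
audio = audio.astype(np.float64)

# Short-time Fourier transform: how the frequency content changes over time.
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=1024, noverlap=512)

# Blue whale calls sit roughly in the 10-40 Hz band; inspect that slice.
low_band = power[(freqs >= 10) & (freqs <= 40), :]
print(f"{len(times)} time frames, peak low-band power: {low_band.max():.3g}")
```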
Why machine learning?
Currently there are thousands of hours of recordings from hydroacoustic arrays at sites around the world, and studies are generally constrained by how much of that audio anyone can review. It would be impossible for a human to listen to all of these recordings, let alone do insightful research with them, given the sheer amount of data available. Machine learning addresses this problem by detecting whale calls automatically within the mass of recorded audio.
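One simple first pass for that detection job is a band-limited energy detector, which flags moments when the energy in a call’s frequency band rises well above the background; the article doesn’t name a specific technique, so this is an illustrative sketch rather than the method any particular project uses. It reuses the freqs, times, and power arrays from the spectrogram sketch above, and the band limits and threshold are assumed, untuned values.

```python
import numpy as np

def flag_candidate_calls(freqs, times, power, f_lo=10.0, f_hi=40.0, n_sigma=3.0):
    """Return the times where in-band energy exceeds the background by n_sigma."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    energy = power[band, :].sum(axis=0)           # in-band energy per time frame
    threshold = energy.mean() + n_sigma * energy.std()
    return times[energy > threshold]

# candidates = flag_candidate_calls(freqs, times, power)
# Each flagged time marks a moment worth a closer look, not a confirmed call.
```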
Once a whale’s call has been identified and verified by a human, it can be analyzed using a number of tools from artificial intelligence (AI). AI makes it possible to build a model that learns the signature audio spectrogram of a whale species and then predicts which species produced a new recording. The model can then be refined to recognize and flag stretches of sound with characteristics similar to those calls. What would take a human a lifetime to sort through can be done in hours with supervised and unsupervised learning.
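As a sketch of the supervised step, the example below trains an off-the-shelf classifier on labeled spectrogram clips. The load_clips helper, its random data, and the species labels are hypothetical stand-ins for a real human-verified dataset, and a production system would likely use a more capable model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def load_clips():
    """Hypothetical loader: returns (clips, labels), where each clip is a
    fixed-size spectrogram (frequency bins x time frames) and each label is
    a species name. Random data stands in for real verified recordings."""
    rng = np.random.default_rng(0)
    clips = rng.random((200, 64, 32))                    # 200 fake spectrograms
    labels = rng.choice(["humpback", "blue"], size=200)  # fake species labels
    return clips, labels

clips, labels = load_clips()
X = clips.reshape(len(clips), -1)  # flatten each spectrogram into a feature vector
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

On random stand-in data the accuracy will hover near chance; the point is the shape of the pipeline, not the number.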
As relatively unknown and underexplored parts of the world come to the surface of human understanding, we seek answers to questions about what the future holds. Given the great advances in technology over the last two decades, it’s no surprise that some of the largest questions in science today are being tackled with the help of artificial intelligence.