Report from the Acoustical Society meeting in New Orleans

 

FishEyeData.com – Fish Eye Collaborative heat maps of fishes communicating
This week I’m down in New Orleans at an Acoustical Society of America (ASA) conference. I enjoy ASA conferences because I get to fraternize with people who are involved with my field of bioacoustics. But I also get to rub elbows with people who are focused on acoustical disciplines completely outside of my field – like acoustical seismology, medical imaging, and speech and language acoustics.
One of the more interesting presentations in the “Musical Acoustics” session, by Chirag Gokani, evaluated the physics of the guitar strumming and picking techniques of jazz legends Joe Pass and Wes Montgomery – perhaps the last line of inquiry where I’d expect to see differential equations.
The presentation was brilliant, eclipsed only by Chirag’s own excellent demonstration of what he’d learned: playing “Stella by Starlight” for the musical acousticians in the session (played in the link).
Monday was just my first day in the “Animal Bioacoustics” session, exploring data-processing strategies for the ambiguous and cluttered soundscapes of fishes. There was a lot of work presented, focused on converting the mind-bending hash of fish sounds heard underwater into individual species identifications (for example).
This has been a perennial problem in identifying any ocean sounds, because what can be heard (or sensed) underwater is often much farther away than it can be seen. So while we can hear distinct underwater ocean sounds, we rarely see where they come from.
And this is exacerbated by the fact that humans can’t orient to or localize underwater sounds: our localization cues – provided by the “time difference of arrival” between our two ears in air – get all scrambled up underwater, where sound travels four to five times faster than in air.
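The arithmetic behind that scrambling is easy to sketch. Below is a minimal illustration (my own toy model, not from any presentation), assuming a ~18 cm ear spacing, 343 m/s sound speed in air, and ~1500 m/s in seawater: the same off-axis source produces a much smaller inter-ear delay underwater, so a brain calibrated for air badly underestimates the angle.

```python
import math

def tdoa_bearing(delta_t, ear_spacing=0.18, c=343.0):
    """Estimate source bearing (degrees off the midline) from the
    time difference of arrival (TDOA) between two ears/sensors.
    Far-field approximation: delta_t = (ear_spacing / c) * sin(theta)."""
    s = c * delta_t / ear_spacing
    s = max(-1.0, min(1.0, s))  # clamp numerical overshoot
    return math.degrees(math.asin(s))

# A source 30 degrees off-axis in air produces this inter-ear delay:
dt_air = 0.18 * math.sin(math.radians(30)) / 343.0     # ~262 microseconds

# A listener calibrated for air (c = 343 m/s) decodes it correctly:
bearing_air = tdoa_bearing(dt_air)                     # 30 degrees

# Underwater, sound at ~1500 m/s arrives from that same 30-degree
# source with a much smaller delay:
dt_water = 0.18 * math.sin(math.radians(30)) / 1500.0  # ~60 microseconds

# The air-calibrated decoder now reports the source nearly dead ahead:
bearing_misjudged = tdoa_bearing(dt_water)             # ~6.6 degrees
```

The clamp on the sine term matters in practice: measurement noise can push the implied sine slightly past 1, and `asin` would otherwise throw a domain error.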
In Monday’s session on “Progress in Fish Bioacoustics,” there were some really informative papers about using “machine learning” to segregate and classify sounds from individual species, and even individual fishes in a cluttered sound field.
One of the more remarkable presentations introduced a method of visually highlighting fish sounds by localizing their phonations and directly correlating these sounds with live video displays through “heat maps” of who was making the sound. This was pretty spectacular, because usually all we get from audio recordings are clacks, knocks, grunts, chirps, and scratches – without any context.
But when their localization technology overlaid heat maps on video of the reef’s chaotic soundscape, you could actually see “conversations” between conspecific fish in their habitat. Prior to this we could only speculate that fishes were communicating with each other, but one of the sound/video clips clearly displayed what I could only characterize as exchanges between conspecifics.
This system will help animal behaviorists put fish phonations in spatial and community context.
I suspect this will really open up our understanding of the culture of fishes, much like video drones have really advanced our understanding of how whales and dolphins play with each other (substantiating my hypothesis that “play is the universal language.”)
This was just Monday. Tuesday’s Animal Bioacoustics session was equally informative – although technically much more complex – covering the use of fiber-optic cables in the ocean as “distributed acoustic sensors”: sound impinging on these cables physically distorts them in ways that can be deciphered into time, frequency, and location cues.
This is a bit too technically cryptic to explain herein, except to say that the zillions of kilometers of oceanic communications cables can also be used as “distributed hydrophones,” revealing maps of everything from whale phonations and aggregations of gamete-dispersing fishes to marine seismic activity and shipping transects.
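The core idea can still be sketched in a few lines. This is my own illustrative toy, not any interrogator vendor's algorithm: an interrogator fires laser pulses down the fiber and times the backscattered light, so round-trip time maps to position along the cable; at each position, the backscatter phase wiggles in time as passing sound strains the glass, and a spectrum of that phase series recovers the acoustic frequency. The fiber refractive index and pulse rate below are assumed nominal values.

```python
import numpy as np

C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_FIBER = 1.468      # refractive index of silica fiber (assumed nominal)

def position_along_fiber(round_trip_s):
    """Backscatter arriving t seconds after the pulse left the shore
    station originated at z = v * t / 2 (light travels out and back)."""
    v = C_VACUUM / N_FIBER
    return v * round_trip_s / 2.0

# Backscatter returning 1 millisecond after the pulse left came from
# roughly 100 km down the cable:
z_km = position_along_fiber(1e-3) / 1000.0

# Synthetic phase record at one position: a 20 Hz disturbance (in the
# range of a fin-whale-like tone), sampled at the interrogation rate:
fs = 1000.0                           # pulses per second (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
phase = np.sin(2 * np.pi * 20.0 * t)  # synthetic strain-induced phase

# An FFT of the phase time series recovers the acoustic frequency:
spectrum = np.abs(np.fft.rfft(phase))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]  # ~20 Hz
```

Repeating the spectral step for every resolvable stretch of fiber is what turns one cable into thousands of virtual hydrophones at once.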
This is not my field, but I was spellbound by some of the presentations.
As you receive this I will be in day 3 of the meeting. Wednesday has less scheduled activity, perhaps because after 90 years of ASA conferences I suspect the conveners wisely know that we attendees need to inhale a bit.
I am presenting a paper on Thursday about the Underwater Internet of Things. This is one of OCR’s banner issues.

 
