Beyond Laurel/Yanny: An Autoencoder-Enabled Search for Polyperceivable Audio

ACL 2021  ·  Kartik Chandra, Chuma Kabaghe, Gregory Valiant

The famous "laurel/yanny" phenomenon references an audio clip that elicits dramatically different responses from different listeners. For the original clip, roughly half the population hears the word "laurel," while the other half hears "yanny." How common are such "polyperceivable" audio clips? In this paper we apply ML techniques to study the prevalence of polyperceivability in spoken language. We devise a metric that correlates with polyperceivability of audio clips, use it to efficiently find new "laurel/yanny"-type examples, and validate these results with human experiments. Our results suggest that polyperceivable examples are surprisingly prevalent in natural language, existing for >2% of English words.
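The abstract describes searching for ambiguous clips with an autoencoder-derived metric but does not specify the metric itself. Below is a minimal, purely illustrative sketch of one way such a search could be scored: embed clips with an encoder and rank a clip as more "polyperceivable" the closer to equidistant its latent code is from two candidate word prototypes. The encoder here is a fixed random linear map standing in for a trained autoencoder's encoder, and the `ambiguity` scoring function is a hypothetical construction, not the paper's actual metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder: a fixed random linear projection standing in
# for a trained autoencoder's encoder (32-dim features -> 8-dim latent).
W = rng.standard_normal((8, 32))

def encode(clip: np.ndarray) -> np.ndarray:
    """Map a 32-dim 'audio feature' vector to an 8-dim latent code."""
    return W @ clip

def ambiguity(clip: np.ndarray, proto_a: np.ndarray, proto_b: np.ndarray) -> float:
    """Illustrative score in [0, 1]: 1 when the clip's latent code is
    equidistant from both word prototypes (maximally ambiguous), 0 when
    it sits squarely on one word."""
    z, za, zb = encode(clip), encode(proto_a), encode(proto_b)
    da = np.linalg.norm(z - za)
    db = np.linalg.norm(z - zb)
    return 1.0 - abs(da - db) / (da + db)

# Toy feature vectors for two words and a midpoint blend between them.
laurel = rng.standard_normal(32)
yanny = rng.standard_normal(32)
blend = 0.5 * (laurel + yanny)

print(ambiguity(laurel, laurel, yanny))  # low: clearly one word
print(ambiguity(blend, laurel, yanny))   # high: equidistant blend
```

A real search would rank many candidate clips by such a score and send only the top-scoring ones to human listeners for validation, which is the general shape of the pipeline the abstract outlines.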
