Wednesday, August 25, 2010

How Useful is the "Audience vs. Expert" Dichotomy?

When it comes to user participation in cultural institutions and the arts, it's popular to launch projects that pit visitors against experts. There was Click! at the Brooklyn Museum, where you could track how people of various levels of art expertise rated crowd-contributed photographs. And now, at the Walker Art Center, there's 50/50, an upcoming print show in which half the prints will be chosen by the public, half by curators. Even ArtPrize, the "radically open" art festival that was judged last year by public vote alone, will incorporate a juried contest as well this year.

There's a sexiness to the perception of divergence between expert and public opinion. It's what keeps curators curators and the public public. But I'm not sure how much value there is to that difference. Instead, I'd like to see us asking broader questions about process, like:
  • How do different people arbitrate the value of a piece of art, a historical artifact, or a piece of scientific evidence?
  • What tools do we use? What expectations and biases and experience and expertise come into play?
  • How do we know what we know, and how do we make judgments of preference?
I've been thinking about this as I prep some interactive prototypes for the Experience Music Project and Science Fiction Museum, a Seattle-based museum of pop culture. One prototype is based on the Billboard Top 10 charts for pop music. Every week, Billboard publishes charts based on airplay, record sales, and now, digital downloads and streaming.

For the interactive prototype, we're letting visitors construct their own "Top 10" and compare it to the Billboard charts. After some research, I decided it would be interesting for people to compare their favorites to the Top 10 over all time rather than focusing solely on the current stats. I recalled that Rolling Stone magazine had put out a list of "the 500 greatest songs of all time" a few years ago, and decided to pull it up to see how it compared to Billboard's top songs. The Billboard list was determined by normalized sales and airplay from 1958-2009, whereas the Rolling Stone list was selected by 172 musicians, critics, and industry professionals.
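
To make the comparison mechanic concrete: at its core, it's just list overlap and rank positions. Here's a minimal sketch in Python; the function name and the made-up song titles are hypothetical placeholders, not the prototype's actual code or data.

    # Illustrative sketch only: compare a visitor's Top 10 with a
    # reference chart. Song titles below are placeholders, not real data.

    def compare_top_tens(visitor, reference):
        """Print the songs two ranked lists share, with each song's rank."""
        shared = [song for song in visitor if song in reference]
        print(f"{len(shared)} of {len(visitor)} picks appear on the chart")
        for song in shared:
            print(f'  "{song}": your #{visitor.index(song) + 1}, '
                  f'chart #{reference.index(song) + 1}')

    visitor_picks = ["Song A", "Song B", "Song C"]  # a visitor's ranked picks
    chart = ["Song B", "Song D", "Song E"]          # e.g., a Billboard Top 10

    compare_top_tens(visitor_picks, chart)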

Here are the two lists' top ten songs:

[image: the Billboard and Rolling Stone top-ten lists, side by side]
I was pretty amazed to notice two things:
  1. These lists have only one song in common: "Hey Jude," at number 8 on both.
  2. The Billboard list is more diverse than the Rolling Stone list in many ways. It features more musical genres, more women, and a wider chronological spread.
The divergence continues further down the lists. While The Beatles top both lists when it comes to artist representation, none of the top twenty artists on the Rolling Stone list are women, whereas Billboard's includes seven. Bob Dylan, the third-most-represented artist on the Rolling Stone list, doesn't even place in Billboard's top one hundred.
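
If you're curious how this kind of tally works, counting artist representation is equally simple. Here's another small Python sketch; the entries are placeholders standing in for the actual 500-song Rolling Stone and all-time Billboard lists, which aren't reproduced here.

    # Illustrative sketch: count how often each artist appears on a ranked
    # list. Entries are placeholders, not the actual chart data.
    from collections import Counter

    def most_represented(songs, n=20):
        """songs: (title, artist) pairs; returns the n most frequent artists."""
        counts = Counter(artist for _title, artist in songs)
        return counts.most_common(n)

    rolling_stone = [("Song A", "Artist 1"), ("Song B", "Artist 2"),
                     ("Song C", "Artist 1")]  # ... 500 entries in the real list

    for artist, count in most_represented(rolling_stone):
        print(f"{artist}: {count} song(s)")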

This isn't a direct "audience versus expert" comparison. Billboard charts are far from egalitarian: particularly when it comes to radio, money plays a big role in determining who hits the airwaves. And I wouldn't be surprised if the music industry insiders who contributed to the Rolling Stone list are the same people who helped launch LeAnn Rimes and Toni Braxton to fame.

This is not a post about power and diversity in the music industry. It is, however, a suggestion that perhaps critics and tastemakers are biased in their preferences in a somewhat homogeneous way. Is the Rolling Stone list better or worse than the Billboard list? It depends who you ask. But it's definitely less broad and more reflective of a particular perspective on what makes good music.

There are really interesting questions inherent in the difference between these two lists and how we arbitrate taste in pop culture as consumers, as artists, as industry professionals, and as critics. I'm hopeful that our little prototype can help us discuss these questions with visitors at the Experience Music Project.

Ultimately, "audience versus expert" may be a red herring that distracts from a larger discussion about personal preference and cultural bias. One of the surprises of Click! was the outcome that the top 10 photographs did not diverge widely based on evaluator expertise. Five of the top ten photographs were top picks for people from at least four different levels of expertise, and all the top ten were selected by people with at least two different levels of expertise. As Wisdom of the Crowds author James Surowieckinoted, "it suggests (though it doesn’t prove) that at least in some mediums, the gap between popular and elite taste may be smaller than we think."

And maybe there are other gaps that are worth exploring.