
Friday, April 22, 2016

A Virtual Headphone Listening Test Method

Fig. 1 The Harman Headphone Virtualizer app allows listeners to make double-blind comparisons of different headphones through a single high-quality replicator headphone. The app has two listening modes: a sighted mode (shown) and a blind mode (not shown), in which listeners are not biased by non-auditory factors (brand, price, celebrity endorsement, etc.).

Early in our headphone research we realized we needed a listening test method that allowed more controlled double-blind comparisons of different headphones. This was necessary to remove tactile cues (headphone weight and clamping force) and visual and psychological biases (e.g. headphone brand, price, celebrity endorsement) from listeners' sound quality judgments. While these factors (apart from clamping force) don't physically affect the sound of headphones, our previous research into blind vs. sighted listening tests [1] revealed that their cognitive influence affects listeners' loudspeaker preferences, often in adverse ways. In sighted tests, listeners were also less sensitive and less discriminating than in blind tests when judging different loudspeakers, including their interaction with different music selections and loudspeaker positions in the room. For that reason, consumers should be dubious of loudspeaker and headphone reviews based solely on sighted listening.

While blind loudspeaker listening tests are possible with an acoustically transparent but visually opaque curtain, there is no simple way to hide the identity of a headphone the listener is wearing. In our first headphone listening tests, the experimenter substituted the different headphones onto the listener's head from behind, so the headphones could not be visually identified. After a couple of trials, however, listeners began to identify certain headphones simply by their weight and clamping force. One of the easiest to identify was the Audeze LCD-2, which was considerably heavier (522 grams) and less comfortable than the other headphones. The test was essentially no longer blind.

To that end, a virtual headphone method was developed whereby listeners could A/B different models of headphones virtualized through a single pair of headphones (the replicator headphone). Details on the method and its validation were presented at the 51st Audio Engineering Society International Conference on Loudspeakers and Headphones in Helsinki, Finland in 2013 [2]. A PDF of the slide presentation can be found here.

Headphone virtualization is done by measuring the frequency response of the different headphones at the DRP (eardrum reference point) using a G.R.A.S. 45 AG, and then equalizing the replicator headphone to match the measured response of each real headphone. In this way, listeners can make instantaneous A/B comparisons among any number of virtualized headphones through the same headphone, without visual and tactile cues biasing their judgment. More details about the method are in the slides and AES preprint.
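As a rough illustration of the equalization step (not the actual Harman implementation), the correction can be thought of as the dB difference between a target headphone's measured response and the replicator's, faded out above the frequency where measurements become unreliable. The fade limit and the data below are hypothetical:

```python
import math

def virtualization_eq(freqs, target_db, replicator_db, max_correct_hz=8000.0):
    """Per-frequency magnitude EQ (dB) that makes the replicator headphone
    match a target headphone's measured response at the eardrum (DRP).
    The correction fades to zero over the octave above max_correct_hz,
    where measurements vary with headphone positioning."""
    eq = []
    for f, t, r in zip(freqs, target_db, replicator_db):
        # 1.0 below the limit, 0.0 one octave above it, linear (in octaves) between
        fade = min(1.0, max(0.0, 1.0 - math.log2(max(f, 1.0) / max_correct_hz)))
        eq.append((t - r) * fade)
    return eq
```

Applying this gain curve to the replicator (e.g. with a linear-phase FIR fit) would reproduce each target headphone's magnitude response below the correction limit while leaving the top octave untouched.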

An important question is: "How accurate are the virtual headphones compared to the actual headphones?" In terms of linear acoustic performance they are quite similar. Fig. 2 compares the measured frequency responses of the actual and virtualized headphones. The agreement is quite good up to 8-10 kHz, above which we did not aggressively equalize the headphones because of measurement errors and large variations related to headphone positioning, both on the coupler and on the listener's head.


Fig. 2 Frequency response measurements of the 6 actual versus virtualized headphones made on a G.R.A.S. 45 AG coupler with pinna. The dotted curves are from the physical headphones and the solid curves are from the virtual (replicator) headphone. The measurements of the right channel (red curves) have been offset by 10 dB from the left channel (blue curves) for visual clarity.

More importantly, "Do the actual and virtual headphones sound similar?" To answer this question we performed a validation experiment in which listeners evaluated 6 different headphones using both the standard and virtual listening methods, giving preference and spectral balance ratings in each. For headphone preference ratings, the correlation between standard and virtual test results was r = 0.85. A correlation of 1 would be perfect, but r = 0.85 is not bad, and hopefully more accurate than headphone ratings based on sighted evaluations.
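For readers who want to check such an agreement figure themselves, the Pearson correlation between two sets of mean ratings is straightforward to compute. The ratings below are made-up placeholders, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical mean preference ratings for 6 headphones, standard vs. virtual test
standard = [5.1, 4.0, 6.2, 3.3, 5.8, 4.5]
virtual = [4.8, 4.4, 6.0, 3.6, 5.5, 4.1]
r = pearson_r(standard, virtual)
```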

We believe the differences between virtual and standard test results are partly due to nuisance variables that were not perfectly controlled across the two methods. A significant one is likely headphone leakage, which affects the amount of bass heard depending on how the headphone fits the individual listener. This would have affected the results in the standard test but not the virtual one, where we used an open-back replicator headphone that largely eliminates leakage variations across listeners. Headphone weight and tactile cues were present in the standard test but not the virtual test, and this could also partly explain the differences in results. If these two variables could be better controlled, even higher accuracy could be achieved in virtual headphone listening.

Fig. 3 The mean listener preference ratings and 95% confidence intervals for the headphones rated using the standard and virtual listening test methods. In the standard method, listeners evaluated the actual headphones with tactile/weight biases and any leakage effects. In the virtual tests, there were no visual or tactile cues about the headphones.


Some additional benefits of virtual headphone testing were discovered besides eliminating sighted and psychological biases: the listening tests are faster, more efficient and more sensitive. When listeners can quickly switch among and compare all of the headphones in a single trial, auditory memory is less of a factor, and they are better able to discriminate among the choices. Since this paper was written in 2013, we've improved the accuracy of the virtualization, in part by developing a custom pinna for our GRAS 45 CA that better simulates the leakage effects of headphones measured on real human subjects [3].

Finally, it's important to acknowledge what the virtual headphone method doesn't capture: 1) non-minimum-phase effects (mostly occurring at higher frequencies) and 2) non-linear, level-dependent distortions. The effect of these two variables on the virtual headphone test method has recently been tested experimentally and will be the topic of a future blog posting. Stay tuned.

References

[1] Floyd Toole and Sean Olive, "Hearing is Believing vs. Believing is Hearing: Blind vs. Sighted Listening Tests, and Other Interesting Things," presented at the 97th AES Convention, preprint 3894 (1994). Download here.

[2] Sean E. Olive et al., presented at the 51st AES International Conference on Loudspeakers and Headphones, Helsinki, Finland (2013).

[3] Todd Welti, "Improved Measurement of Leakage Effects for Circum-Aural and Supra-Aural Headphones," presented at the 136th AES Convention (May 2014). Download here.




Thursday, April 21, 2011

Topics Related to Perception and Measurement of Reproduced Sound


On Tuesday, April 26th 2011, I will be giving a presentation at the meeting of the Los Angeles AES Chapter on several topics related to recent audio research at Harman International. The topics include:

I've briefly discussed these topics in Audio Musings over the past few months, and you can find summaries of them by clicking on the links above. I'll be giving an update on new findings, and briefly touch on topics not mentioned above. As a door prize, Harman will donate a free copy of Dr. Floyd Toole's book Sound Reproduction (shown on the right side bar) autographed by the author of the book.

AES members and nonmember guests are welcome to attend. The meeting will be held at the Sportsmen's Lodge in Studio City. More details can be found at the Los Angeles AES website.

Friday, June 18, 2010

Some New Evidence That Generation Y May Prefer Accurate Sound Reproduction


Sound quality in mainstream music recording and reproduction is all but dead, at least according to the media reports published over the past year [1]-[6]. On the music production side, music quantity (as in volume and decibels) matters more than quality and dynamic range. Record executives and producers are forcing artists to squash the dynamics and life from their music in order to be the loudest record on the charts [5],[6]. Listening to one of these albums can induce an instant migraine, making you wonder if the record companies aren't secretly owned by the makers of Excedrin (see slides 2-4 in this article's accompanying PDF slide presentation or this YouTube video).
On the music reproduction side, convenience, portability and low cost are the purchase-driving factors in this Mobile-iPod Age of entertainment; sound quality need only be "good enough" [3]. The problem is that no one seems able to define what "good enough" sound quality means for Generation Y. Given that they represent the largest and youngest demographic in terms of music and audio equipment consumption, it's important to understand the attitudes and tastes of these twenty-somethings before it's too late. Getting these Millennials hooked on good sound now means they're more likely to upgrade the audio systems in the homes and automobiles they acquire as they grow older and wealthier.

A common belief being spread by the media is that Generation Y is indifferent to sound quality, or worse, prefers the tinny, sizzling sound of low-bit-rate MP3 over higher quality lossless music formats (slide 4). This is based on an informal study by Stanford music professor Jonathan Berger, who over a 7-year period found his students increasingly preferred music coded in lower quality lossy MP3 formats over higher quality lossless formats [1]-[5]. "I think our human ears are fickle," says Berger. "What's considered good or bad sound changes over time. Abnormality can become a feature" [1].


While Berger's unpublished study raises more scientific questions than it answers, it has nonetheless been widely reported by the media and has captured the attention of consumer and automotive audio marketing executives, who ultimately decide what level of sound quality is "good enough" for Generation Y (slides 4-7). There's an increased risk that sound quality may become the sacrificial lamb for products targeted at Millennials (they can't tell the difference, after all), with the savings diverted to more salient "purchase drivers" such as industrial design, more features, advertising, and celebrity endorsements.
If someone doesn't soon stand up for Generation Y and show some evidence that they care about sound quality, its death may become a self-fulfilling prophecy.

Some New Experiments on Generation Y Sound Quality Preferences For Music Reproduction
To this end, I recently conducted two listening experiments on a group of high school students (the younger half of Generation Y) to determine if their sound quality preferences in music reproduction were: a) consistent with those of older trained listeners used for product evaluation at Harman International, or alternatively b) indifferent or skewed towards preferring less accurate sound (slide 8).
Two research questions were asked in separate tests:
  1. Do the students prefer the sound quality of lossy MP3 (128 kbps) music reproduction over the original lossless CD version?
  2. Do the students prefer music reproduced through a more accurate loudspeaker given four different options that vary in accuracy and sound quality?
The students, who ranged in age from 15 to 18 years, were visiting Harman on a class field trip (slide 9). A description of the listening tests and the results are summarized in the following sections.

Do High School Students Prefer Lossy MP3 Music Over Lossless CD-Quality Formats?
In the first double-blind listening experiment (slide 11), the students were presented two versions of the same program selection encoded in:
  1. MP3 (Lame 3.97, version 2.3; constant bit rate @ 128 kbps). Note that this is a 2-year-old MP3 encoder that may be more representative of what Berger used in his study.
  2. CD - The original lossless CD-quality version (16-bit, 44.1 kHz).
After hearing the same music several times in both MP3 and CD formats, the listeners indicated on a scoresheet which one they preferred: A or B. They were also asked to indicate the magnitude of preference (slight/moderate/strong), and provide comments describing the differences in sound quality they heard.
A total of 12 trials per listener were completed: preference choices were recorded for four different short program loops, each presented in three separate trials (slide 10). Three music programs and a recording of applause at a live concert were chosen based on their ability to reveal audible differences between the lossy and lossless formats. The applause gave listeners a familiar acoustic signal that the author felt most listeners could easily judge for apparent naturalness.
The order of programs and MP3/CD formats was randomized by the listening test software to eliminate order-related effects. Switching between A and B was performed by the test administrator via a custom Harman listening test software application. The listening test was conducted in the Harman International Reference Room, which provided a quiet and controlled acoustic environment typical of a domestic listening room. Listening was done through a high quality stereo playback system (JBL LSR 6336 with four JBL HB5000 subwoofers) calibrated at the listening locations. A comfortable playback level (on average 78 dB(B)) was used throughout the tests.
Two groups of nine listeners each participated in two separate listening sessions, which lasted about 30 minutes each.

Listening Test Results: Students Prefer Music in Lossless CD Versus MP3 Formats
When all 12 trials were tabulated across all listeners, the high school students preferred the lossless CD format over the MP3 version in 67% of the trials (slide 16). The CD format was preferred in 145 of 216 trials (p<0.001).
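The overall result can be checked with a one-sided exact binomial test against chance (p = 0.5), a standard way to analyze forced-choice preference counts (the study's exact analysis may have differed):

```python
from math import comb

def binomial_tail(successes, n, p=0.5):
    """One-sided exact binomial probability of observing at least
    `successes` out of `n` trials when each outcome is chance (p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(successes, n + 1))

# 145 CD preferences out of 216 trials, tested against a 50/50 coin flip
p_value = binomial_tail(145, 216)
```

The resulting probability is far below 0.001, consistent with the significance level reported above.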
As expected, there were differences among individual students in their ability to formulate consistent preference choices (slide 17). Nearly 40% of the listeners gave a sufficient number of preference choices (9 of 12) to establish a statistically significant preference for CD (p <= 0.054). Only one of the 18 listeners preferred MP3 over CD (7 versus 5 trials), although that preference was not statistically significant (p = 0.19). The remaining listeners were either guessing or inconsistent in their choices. With additional training and trials, their performance would likely improve.
On average, the magnitude of preference for CD over MP3 was also stronger based on the frequency of responses assigned to the categories of preference: slight, moderate and strong preference (slide 18). When CD format was preferred, listeners assigned a proportionally higher number of moderate-to-strong responses compared to when MP3 was the preferred choice.
The preference for CD over MP3 formats was relatively independent of the program selection (slide 19). CD was preferred for all four programs, with only a slight drop (from 68.5% to 63%) for program JW.
Finally, the comments given by the more consistent listeners (slide 20) reveal the nature of the audible differences between MP3 and CD. The CD version was often described as sounding more dynamic and brighter, with more impact on percussive sounds. The MP3 versions were described as sounding duller and dynamically compressed, with swirling pitch-modulation artifacts on vocals and strings.

Do High School Students Prefer Neutral/Accurate Loudspeakers?
Given that the high school students preferred the higher quality music format (CD over MP3), would their taste for accurate sound reproduction hold true when evaluating different loudspeakers? To test this question, the students participated in a double-blind loudspeaker test where they rated four different loudspeakers on an 11-point preference scale. The preference scale had semantic differentials at every second interval defined as: 1 (really dislike), 3 (dislike), 5 (neutral), 7 (like) and 9 (really like). The relative distances in ratings between pairs of loudspeakers indicated the magnitude of preference: ≥ 2 points represent a strong preference, 1 point a moderate preference and ≤ 0.5 point a slight preference.
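The scale's interpretation above can be captured in a small helper. The boundary between "slight" and "moderate" (gaps between 0.5 and 1 point) is not stated, so treating everything below 1 point as slight is an assumption:

```python
def preference_strength(rating_a, rating_b):
    """Interpret the gap between two loudspeaker ratings on the 11-point scale.
    Thresholds follow the text: >= 2 points strong, ~1 point moderate,
    <= 0.5 point slight; gaps between 0.5 and 1 are treated as slight
    (an assumption, since that range is not specified)."""
    d = abs(rating_a - rating_b)
    if d >= 2.0:
        return "strong"
    if d >= 1.0:
        return "moderate"
    return "slight"
```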
The four loudspeakers were floor-standing models (slide 22): the Infinity Primus 362 ($500 a pair), Polk RTi10 ($800), Klipsch RF-35 ($600), and Martin Logan Vista ($3,800). Each loudspeaker was installed on the automated speaker shuffler in Harman International's Multichannel Listening Lab, which positions the active loudspeaker in the same location. In this way, positional biases are removed from the test. Each loudspeaker was level-matched to within 0.1 dB at the primary listening location.
Listeners completed a series of four trials in which they could compare each of the four loudspeakers a number of times before rating each one on the 11-point preference scale. Two different music programs were used, with two observations each. At the beginning of each trial, the computer randomly assigned the four letters (A, B, C, D) to the loudspeakers, so the loudspeaker ratings in consecutive trials were more or less independent (slide 23).
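The per-trial letter assignment can be sketched as a simple shuffle; this is an illustrative reconstruction, not Harman's actual test software:

```python
import random

def assign_labels(speakers, rng=random):
    """Randomly map the letters A-D to four loudspeakers for one trial,
    so the labels carry no information across trials (double-blind)."""
    letters = ["A", "B", "C", "D"]
    shuffled = speakers[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))
```

Re-running this at the start of every trial breaks any association a listener might form between a letter and a loudspeaker's sound.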

Results: High School Students Prefer More Accurate, Neutral Loudspeakers
When averaged across all listeners and programs, there was a moderate-to-strong preference for the Infinity Primus 362 loudspeaker over the other three choices (slide 25). In the accompanying slide, as an industry courtesy, the competitors' loudspeakers are identified simply as Loudspeakers B, C and D.
As a group, the listeners were not able to formulate preferences among the three lower rated loudspeakers B,C, and D, which were all imperfect in different ways. For an untrained listener, sorting out these different types of imperfections and assigning consistent ratings can be a difficult task without practice and training [5].
The individual listener preferences (slide 26) reveal that 13 of the 18 listeners (72%) preferred the Infinity loudspeaker based on their ratings averaged across all programs and trials.
When comparing the students' rank ordering of the loudspeakers to that of the trained Harman listeners (slide 27), we see good agreement between the two groups. The one exception is Loudspeaker C, which the trained listeners strongly disliked. The general agreement between trained and untrained listeners' loudspeaker preferences in this test is consistent with previous studies using a different set of listeners and loudspeakers [5],[6]. As found previously, the trained listeners on average rated each loudspeaker about 1.5 preference points lower than the untrained listeners, and were more discriminating and consistent in their ratings [5],[7].
The comprehensive set of anechoic measurements for each loudspeaker is compared to its preference rating (slide 28). There are clear visual correlations between the set of technical measurements and listeners’ loudspeaker preference ratings. The most preferred loudspeaker (Infinity Primus 362) had the flattest measured on-axis and listening window curves (top two curves), and the smoothest first reflection, sound power and first reflection/sound power directivity index curves (the third, fourth, fifth and sixth curves from the top). The other loudspeaker models tended to deviate from this ideal linear behavior, which resulted in lower preference ratings. Again, this relationship between loudspeaker preference and a linear frequency response is consistent with similar studies conducted by the author and Toole [9],[10].
Finally, as these experiments illustrate, good sound quality doesn't necessarily cost more money. The most accurate and preferred loudspeaker, the Infinity Primus 362, was also the least expensive in the group at $500 a pair. It doesn't cost any more to make a loudspeaker sound good than it costs to make it sound bad. In fact, the least accurate loudspeaker (Loudspeaker C) cost almost 8x more ($3,800) than the most accurate and preferred model. Sound quality can be achieved by paying close attention to the variables that scientific research says matter, and then applying good engineering design to optimize those variables at every price point.

Conclusions
A group of 18 high school students participated in two double-blind listening tests that measured their sound quality preferences for music reproduced in lossy (MP3 @ 128 kbps) and lossless (CD quality) formats, as well as music reproduced through loudspeakers that varied in accuracy. In both tests, the high school students preferred the most accurate option, preferring CD over MP3, and the most accurate loudspeaker over the less accurate options.
While this study is still in its early phase, these preliminary results suggest that these teenagers can reliably discriminate among different degradations in sound quality in music reproduction. When given the opportunity to hear and compare different qualities of sound reproduction, the high school students preferred the higher quality, more accurate reproduction over the lower quality choices.
The audio industry should not discount the potential opportunities to provide a higher quality audio experience to members of Generation Y. The popular belief that they don't care about or appreciate sound quality needs to be critically reexamined. This data suggests there are opportunities to sell good sounding audio products to Generation Y as long as the products hit the right features and price points. The audio industry should also give these consumers the education and information (i.e. meaningful performance specifications) needed to distinguish the good sounding products from the duds. Science can already do this (see slide 28); it's simply a matter of making the information more widely available.

References
[1] Joseph Plambeck, "In Mobile Age, Sound Quality Steps Back," New York Times, May 9, 2010.
[2] Andrew Edgecliffe-Johnson, "Could a Pair of Headphones Save the Music Business?" Financial Times, June 12, 2010.
[3] Robert Capps, "The Good Enough Revolution: When Cheap and Simple Is Just Fine," Wired Magazine, August 24, 2009.
[4] Dale Dougherty, "The Sizzling Sound of Music," O'Reilly Radar, March 1, 2009.
[5] Nora Young, "Full Interview: Jonathan Berger on MP3s and 'Sizzle'," CBC Radio, March 24, 2009.
[6] "The Loudness Wars: Why Music Sounds Worse," All Things Considered, NPR Music, December 31, 2009.
[5] Sean E. Olive, "Differences in Performance and Preference of Trained Versus Untrained Listeners in Loudspeaker Tests: A Case Study," J. AES, Vol. 51, Issue 9, pp. 806-825, September 2003. (download for free courtesy of Harman International).
[6] Sean Olive, "Part 1 - Do Untrained Listeners Prefer the Same Loudspeakers as Trained Listeners?" Audio Musings, December 26, 2008.
[7] Sean Olive, "Part 2 - Differences in Performance of Trained Versus Untrained Listeners," Audio Musings, December 27, 2008.
[8] Sean Olive, "Part 3 - Relationship between Loudspeaker Measurements and Listener Preferences," Audio Musings, December 28, 2008.
[9] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1," J. AES, Vol. 34, Issue 4, pp. 227-235, April 1986. (download for free courtesy of Harman International).
[10] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2," J. AES, Vol. 34, Issue 5, pp. 323-348, May 1986. (download for free courtesy of Harman International).

Thursday, March 11, 2010

A Method For Training Listeners and Selecting Program Material For Listening Tests

The benefits of training listeners for subjective evaluation of reproduced sound are well documented [1]-[3]. Not only do trained listeners produce more discriminating and reliable sound quality ratings than untrained listeners, but they can report what they perceive in very precise, quantitative and meaningful terms.


One of the unexpected byproducts of listener training is that it identifies which music selections are most sensitive to distortions commonly found within the audio chain [4]. This is exactly what was found in a series of listener training experiments the author reported in a 1994 paper entitled, “A method for training listeners and selecting program material for listening tests.” The following sections summarize the findings of those early experiments, which helped establish an objective method for training and selecting listeners and program material used for listening tests at Harman International over the past 16 years. A slide presentation summarizing the paper can be downloaded here, and will be referred to throughout the following sections.


Matching the Sound of Spectral Distortions to Their Frequency Response Curve


A computer-based training task was designed where listeners were required to compare different spectral distortions added to programs and then match the frequency response curve of the filter that generated the distortion (see slides 4-5). This was repeated using eight different equalizations and twenty different music selections digitally edited into short 10-20 s loops.


The equalizations included ±3 dB shelving filters at low (100 Hz) and high (5 kHz) frequencies, and ±3 dB resonances (Q = 0.66) centered at 500 Hz and 2 kHz (slide 6). An unprocessed version of the program (Flat) was always provided as a reference. The twenty music selections included classical, jazz and pop/rock genres, with instrumentation that varied from solo instruments, speech and small combos to rock combos and orchestras (slide 7). Pink noise was also included, since this continuous broadband signal has been found to produce the lowest detection thresholds for resonances in loudspeakers [5],[6].
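The resonance equalizations can be approximated with a standard peaking biquad (RBJ Audio EQ Cookbook form). The 48 kHz sample rate is an assumption, and the original filters may have been realized differently:

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs=48000.0):
    """Peaking EQ biquad coefficients (b, a) per the RBJ Audio EQ Cookbook."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs=48000.0):
    """Filter magnitude response in dB at frequency f."""
    z = cmath.exp(-1j * 2.0 * math.pi * f / fs)  # evaluate H(z) on the unit circle
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20.0 * math.log10(abs(num / den))
```

For example, peaking_biquad(500.0, 3.0, 0.66) yields a filter whose response peaks at exactly +3 dB at 500 Hz, matching one of the midband equalizations described above.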


Eight untrained listeners with normal hearing participated in the training exercises, which were conducted over five separate listening sessions of 32 trials each (slides 8 and 9). The presentation order of the equalizations, trials, and programs was randomized to prevent order-related biases. Listener performance was based on the percentage of correct responses given over the course of the five training sessions.


The Results


The training results were statistically analyzed using a repeated measures analysis of variance (ANOVA) to determine the effect the different music programs, equalizations, and trials had on the listeners’ performance in correctly identifying the different equalizations (slide 11).


Listener Performance Is Strongly Influenced by Program Selection


The single largest effect on listener performance was program selection. Slide 13 plots the mean listener performance scores for each of the twenty programs, averaged across all eight equalizations. The percentage of correct responses ranged from a high of 88% (pink noise) to a low of 54% (jazz piano trio). Listeners performed the task best with broadband, spectrally dense, continuous signals like pink noise or pop/rock selections from Tracy Chapman, Little Feat, and Jennifer Warnes. They performed worst on programs featuring solo instruments, small combos and speech, which produce more discontinuous, narrow-band signals. More about this later.



Equalization Context Influences Listener Performance


The effect of equalization on listener performance was surprisingly small (slide 14). There was a tendency for listeners to identify the low- and high-frequency spectral distortions more correctly than the midband equalizations. The explanation can be found in the equalization × trial interaction, which indicates that listener performance depended on which combinations of equalizations were presented within a trial. In other words, the context in which an equalization was presented influenced listener performance (slide 15). These contextual effects can be summarized as follows:


  1. Listeners gave more correct responses when the presented equalizations were more separated in frequency.
  2. Listeners gave more correct responses when presented spectral boosts versus notches; spectral notches were often confused with spectral peaks located at slightly higher frequencies.
  3. Low frequency boosts were often confused with high frequency cuts (and vice versa).
  4. Low frequency cuts were often confused with high frequency boosts (and vice versa).



Greater frequency separation between different equalizations produces more distinctive tonal or timbral differences that help identification. The second observation confirms previous research finding that spectral notches are more difficult to detect than spectral peaks of similar bandwidth [5]. The one exception is broadband dips, which have detection thresholds similar to resonance peaks of equivalent bandwidth [6]. Observations 3 and 4 are related to each other, and are more difficult to explain. At first glance, it seems implausible that boosts and cuts separated by five octaves could be confused with one another. A possible explanation is that listeners use information across the entire bandwidth to judge the perceived balance of bass and treble. In this case, the slope or shape of the spectrum must be an important factor (slide 16). Since a boost or a cut of similar magnitude at opposite ends of the audio bandwidth produces a similar broadband shape or slope, this might explain why listeners confuse the two.


Program and EQ Interact to Influence Listener Performance


There was also a significant interaction between program and equalization that affected listener performance. This effect was most apparent in training session 3, where performance varied significantly depending on the combination of program and equalization presented (slide 18). It seemed plausible that these differences were related to the spectra of the programs, and this was confirmed by plotting the average 1/3-octave spectra of the four programs (slide 19). The largest listener response errors tended to occur when the equalization fell in a frequency range where there was little spectral energy in the program (e.g. Programs P10 (Stan Getz) and P19 (Canadian Brass)). It makes sense that listeners cannot easily analyze spectral distortions if the program material does not contain signals that make them audible.



Not All Listeners Are Equal to the Task


No amount of training will make me eligible for the Canadian Olympic hockey team, even if I were 25 years younger. Some people simply lack the innate mental and physical raw material to perform a highly specialized task. This is also true for critical listening, as illustrated by the average performance scores of the eight listeners after 5 listening sessions (slide 20). Individual performance ranged from 82% (listener 4) to 31% (listener 3). All listeners had normal hearing, so the large inter-listener variance in performance must be related to other factors, such as listener motivation, attentiveness, and listening (and general) intelligence. Training data such as this can provide an objective, quantifiable metric for selecting the best listeners for audio product evaluations.



Practice Makes Perfect


The measure of success for any listener training method is whether it leads to measurable improvement in performance with repetition. Slide 21 shows listener performance measured over five training sessions for the eight listeners tested. The graph shows a monotonic improvement from 65% correct responses to 80% over the five sessions, and additional sessions would most likely yield further gains for some subjects. In other words, the training works!



Programs With Wider and Flatter Spectra Improve Listener Performance (Why Tracy Chapman is as Good as Pink Noise)


Spectrum analysis was performed on the different program selections to see whether it could explain the strong effect of program on listener performance. The 1/3-octave spectrum of each program was computed as a long-term average taken over the entire length of the loop. Looking at the spectra of the programs, it became clear that spectral content was a significant predictor of how well listeners would perform the task.
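As a rough illustration of this kind of analysis, here is a minimal Python sketch of a long-term 1/3-octave spectrum, computed from a single FFT over the whole program loop. The band edges and the white-noise test signal are my own assumptions for the sketch, not the analysis software used in the study:

```python
import numpy as np

def third_octave_spectrum(signal, fs):
    """Long-term average 1/3-octave spectrum of a program loop.

    Returns (center_freqs, levels_db), with levels in dB relative
    to the strongest band.
    """
    # Long-term average: one FFT over the whole loop (averaging shorter
    # frames, Welch-style, would be the more robust choice in practice).
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    # 1/3-octave center frequencies from about 25 Hz to 20 kHz.
    centers = 1000.0 * 2.0 ** (np.arange(-16, 14) / 3.0)
    edges_lo = centers * 2.0 ** (-1.0 / 6.0)
    edges_hi = centers * 2.0 ** (1.0 / 6.0)

    levels = []
    for lo, hi in zip(edges_lo, edges_hi):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        power = band.sum() if band.size else 0.0
        levels.append(10.0 * np.log10(power + 1e-20))
    levels = np.array(levels)
    return centers, levels - levels.max()

# White-noise test signal: constant power per Hz, so each 1/3-octave
# band collects more energy than the one below it (+~1 dB per band).
rng = np.random.default_rng(0)
fs = 48000
centers, levels = third_octave_spectrum(rng.standard_normal(fs * 5), fs)
```

A flat line on this kind of plot (as pink noise would give) is exactly the "wide and flat spectrum" property that predicted good listener performance.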


Slide 22 plots the average spectrum of four groups of programs (5 programs in each group), rank ordered from highest to lowest according to the listener performance scores they produced. It clearly shows that the programs with the flattest and most extended spectra (e.g. pink noise, pop/rock, full orchestra) were better suited for identifying spectral distortions. After pink noise (program 1), Tracy Chapman (program 2 in the above graph) had among the widest and flattest spectra measured, and the two produced the highest listener performance scores. Programs with narrow-band spectra and limited energy above and below 500 Hz (speech, solo instruments, small jazz and classical ensembles), concentrated in group 4, were less suited for identifying spectral distortions. While this group contained some of the most musically entertaining selections, in the end they were not good signals for detecting and characterizing spectral distortions in audio components.



Conclusions


A listener training method has been described that teaches listeners to identify spectral distortions according to their frequency response curves. Experimental evidence showed that listeners improved their performance in this task after five training sessions, although not all listeners performed equally well.


Statistical analysis of the training data revealed that program selection is the largest factor influencing listener performance in this task: programs with continuous broadband spectra (e.g. pink noise, Tracy Chapman, etc.) provide the best signals for characterizing spectral distortions, whereas programs with narrow-band spectra (e.g. speech, solo instruments) provide poor signals for the task. Furthermore, listeners seem to confuse certain types of spectral distortions with others when the distortions share similarities in frequency, bandwidth, and broadband spectral slope or shape.

Finally, it is important to remember that the training methods and programs discussed in this study focused on the perception and analysis of spectral distortions. While these are the most dominant distortions found in loudspeakers, microphones and listening rooms, there are other types of distortions for which a different set of programs is likely better suited for revealing their audibility. The current Harman listener training software “How to Listen” includes training tasks on spectral distortion as well as spatial, dynamic and various types of nonlinear distortions, for which we hope to discover the optimal programs for detecting and analyzing their audibility. Stay tuned.



References


  1. Olive, Sean E., “Differences in Performance and Preference of Trained Versus Untrained Listeners in Loudspeaker Tests: A Case Study,” J. Audio Eng. Soc., vol. 51, no. 9, pp. 806-825 (September 2003). Download for free here, courtesy of Harman International.
  2. Bech, Soren, “Selection and Training of Subjects for Listening Tests on Sound-Reproducing Equipment,” J. Audio Eng. Soc., vol. 40, no. 7/8, pp. 590-610 (July 1992).
  3. Toole, Floyd E., “Subjective Measurements of Loudspeaker Sound Quality and Listener Performance,” J. Audio Eng. Soc., vol. 33, pp. 2-32 (1985 Jan./Feb.).
  4. Olive, Sean E., “A Method for Training Listeners and Selecting Program Material for Listening Tests,” presented at the 97th AES Convention, preprint 3893 (November 1994).
  5. Toole, Floyd E. and Sean E. Olive, “The Modification of Timbre by Resonances: Perception and Measurement,” J. Audio Eng. Soc., vol. 36, pp. 122-142 (March 1988).
  6. Olive, Sean E.; Schuck, Peter L.; Ryan, James G.; Sally, Sharon L.; Bonneville, Marc E., “The Detection Thresholds of Resonances at Low Frequencies,” J. Audio Eng. Soc., vol. 45, no. 3, pp. 116-128 (March 1997).
  7. Olive, Sean E., “Harman’s How to Listen - A New Computer-based Listener Training Program,” May 30, 2009.

Friday, February 5, 2010

Evaluating the Sound Quality of iPod Music Stations: Part 1


For many consumers, an iPod Music Docking Station may be the primary audio device through which they experience most of their recorded music and infotainment. These ubiquitous devices offer a convenient, low-cost, portable and easy-to-use way to enjoy an iPod through loudspeakers -- but what about their sound quality? What sonic compromises are made to achieve this level of convenience and portability? Do certain models or brands of iPod Music Stations offer better sound than others, and if so, how can consumers identify them? These are legitimate questions consumers should be asking when purchasing an iPod Music Station. Unfortunately, the answers are not readily found.


Choosing an iPod Music Station based on sonic performance is a daunting task for consumers. There are dozens of models to choose from, varying in price from $80 to as high as $3,000 for a model designed by Ferrari. Competent in-store demonstrations and reviews of these products are difficult to find, and the technical specifications on the packaging provide no clear indication of how good they sound. For traditional loudspeakers it is already possible to quantify sound quality, but the audio industry continues to withhold this information from consumers. Without meaningful performance specifications in place, consumers cannot make sound purchase decisions, nor can manufacturers easily be held accountable for delivering products that sound “not good enough.”


This article describes a listening test method used at Harman International for evaluating the sound quality of Harman and competitors’ iPod Music Stations. The goal is to provide subjective ratings of iPod Music Stations that are accurate, reliable and scientifically valid. From these data, a set of technical performance specifications can be developed that quantify how good the products sound.


Designing Listening Tests For iPod Music Stations


Fortunately, there already exists a large body of scientific knowledge on how to design accurate, reliable and valid listening tests on loudspeakers. A key ingredient is careful control of listening test nuisance variables: psychological, electro-acoustical and experimental factors not directly related to the product(s) under test that nonetheless influence and bias the results (click on the figure below). Some of the more significant nuisance variable controls that should be in place, but are often ignored by audio manufacturers and reviewers, are:

  • Double-blind conditions (this removes the effects of sighted biases related to brand, price, etc.)
  • Trained listeners with normal hearing (trained listeners are up to 20 times more discriminating and reliable than untrained listeners, yet their overall sound quality preferences are similar to those of untrained listeners)
  • Quiet listening room with acoustics that are representative of average homes (important for hearing low level sounds and the quality of the loudspeaker's off-axis radiated sounds)
  • Loudness matching between products (the perception of timbre, spatial and dynamic attributes is level-dependent)
  • Selection of well-recorded music selections that are revealing of sound quality differences
  • Multiple comparisons among products, which are more discriminating and reliable than single-stimulus presentations
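The loudness matching control above can be approximated in a few lines of code. This is a simplified RMS-based sketch - a perceptual loudness measure such as ITU-R BS.1770 would be closer to actual practice - and the signals and target level are illustrative assumptions:

```python
import numpy as np

def matching_gains(signals, target_dbfs=-20.0):
    """Per-product gain factors that equalize broadband RMS level.

    A plain RMS match is a rough stand-in for proper loudness matching;
    a perceptual measure (e.g. ITU-R BS.1770) weights the spectrum to
    better track what listeners actually hear as "equally loud".
    """
    target = 10.0 ** (target_dbfs / 20.0)  # -20 dBFS -> 0.1 linear
    gains = []
    for x in signals:
        rms = np.sqrt(np.mean(np.square(x)))
        gains.append(target / rms)
    return gains

# Two hypothetical products playing the same program at different levels.
rng = np.random.default_rng(1)
a = 0.3 * rng.standard_normal(48000)   # product A, louder
b = 0.05 * rng.standard_normal(48000)  # product B, much quieter
ga, gb = matching_gains([a, b])
# After applying the gains, both signals sit at the same RMS level,
# so listeners judge timbre rather than "louder sounds better".
```

Without this step, the louder product tends to win the comparison regardless of its actual fidelity.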



These nuisance variable controls are essential for obtaining accurate, reliable and valid sound quality ratings of iPod Music Stations.



Including the Acoustical Effects of the Wall and Desktop in the Listening Test


If audio products are not tested under conditions similar to those for which they were designed and intended to be used, the ecological validity (as well as the external validity) of the test may be compromised: in other words, the test results will have little value or relevance to how the product is typically used in the real world.


Most iPod Music Stations are intended to be placed on a desktop or bookshelf located near a wall, which causes acoustical reinforcement and cancellation at certain audio frequencies. Below 500 Hz there is a gradual increase in sound pressure level that, unless compensated for in the design of the product, can make vocals and bass instruments sound tubby and boomy. Diffraction effects and reflections from the desktop or bookshelf may also produce audible effects that should be included in the listening test. For these reasons, listening tests on iPod Music Stations are best done at a desktop/wall boundary.
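The reinforcement and cancellation can be illustrated with a toy image-source model: the listener hears the direct sound plus one delayed reflection from the wall behind the unit. All distances here are hypothetical examples, not measurements of any real setup:

```python
import numpy as np

# Toy image-source sketch: direct sound plus one reflection from a wall
# behind the loudspeaker (a real desktop adds further reflections).
listener_dist = 1.0   # m, direct path to the listener (assumed)
wall_gap = 0.25       # m, speaker-to-wall distance (assumed)
c = 343.0             # speed of sound, m/s

# The reflected path is longer by twice the speaker-to-wall distance.
reflected_dist = listener_dist + 2 * wall_gap

freqs = np.array([50.0, 343.0, 686.0])  # Hz
direct = np.exp(-2j * np.pi * freqs * listener_dist / c) / listener_dist
reflected = np.exp(-2j * np.pi * freqs * reflected_dist / c) / reflected_dist

# Level change relative to the direct sound alone, in dB.
gain_db = 20 * np.log10(np.abs(direct + reflected) / np.abs(direct))
# At 50 Hz the two arrivals are nearly in phase (reinforcement); at
# 343 Hz the 0.5 m path difference is half a wavelength (cancellation);
# at 686 Hz it is a full wavelength (reinforcement again).
```

This alternating boost/cut pattern is exactly why a product voiced in free space can sound tubby or boomy once it lands on a desk against a wall.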



A Video On How We Evaluate the Sound Quality of iPod Docking Stations


The video shown at the top of the page illustrates how iPod Music Stations are currently evaluated in the Harman International Reference Listening Room. The acoustical properties and features of the room were described in detail in a previous posting.


In the video you see a trained listener comparing three different iPod Music Stations situated on our automated in-wall speaker mover, configured with a removable shelf and desktop. An acoustically transparent, visually opaque screen is placed between the listener and the products under test so that the test is double-blind (note: the term double-blind means that neither the listener nor the experimenter knows the identities of the products currently selected, since the computer controls the test and randomly assigns the letters A/B/C to the products in each trial).
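The random letter assignment can be sketched in a few lines. The product names and seed below are placeholders, not the actual LTS implementation:

```python
import random

def assign_letters(products, seed=None):
    """Randomly map hidden product identities to the letters A, B, C, ...

    Only the computer keeps this mapping; the listener and the
    experimenter see just the letters, which is what makes the
    comparison double-blind.
    """
    rng = random.Random(seed)
    shuffled = list(products)
    rng.shuffle(shuffled)
    return {chr(ord("A") + i): p for i, p in enumerate(shuffled)}

# A fresh random mapping is drawn for every trial, so a listener
# cannot learn that "B" is always the same product.
trial = assign_letters(["Station 1", "Station 2", "Station 3"])
```

Re-randomizing per trial also prevents order and position effects from piling up on one product across the session.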


The listener can switch between the different products at will and enter their responses via a wireless PDA equipped with a custom listening test software (LTS) client application. Sound quality ratings are given on a number of pre-defined scales that include preference, spectral balance, distortion, and auditory image size. This is repeated twice using four different programs.


The PDA client communicates with the LTS server application that performs the following functions:


  • A test wizard that defines all experimental design and setup parameters (perceptual scales, presentation of stimuli, program, randomization of test objects, playback level, etc.), which are then stored in a database
  • Automation and administration of the listening test and its hardware (e.g. speaker mover, media player, DSP, audio switcher)
  • Collection, storage and statistical analysis of listening test data
  • Real-time monitoring of the listener’s performance and ratings during the test


LTS makes conducting listening tests an efficient and repeatable process by minimizing human interaction and errors in the test setup, data storage, and analysis of the results.


Conclusions


This article has described a listening test method for evaluating iPod Music Stations, with the goal of providing accurate, reliable and valid sound quality ratings. In Part 2, I will show results from a recent listening test conducted on different iPod Music Stations, followed by acoustical measurements of the products in Part 3. By studying the relationship between well-controlled scientific listening tests and comprehensive acoustical measurements of iPod Music Stations, a meaningful technical specification based on sound quality can be developed.