
Friday, June 18, 2010

Some New Evidence That Generation Y May Prefer Accurate Sound Reproduction


Sound quality in mainstream music recording and reproduction is all but dead, at least according to the media reports published over the past year [1]-[6]. On the music production side, music quantity (as in volume and decibels) matters more than quality and dynamic range. Record executives and producers are forcing artists to squash the dynamics and life from their music in order to have the loudest record on the charts [5],[6]. Listening to one of these albums can induce an instant migraine, making you wonder if the record companies aren’t secretly owned by the makers of Excedrin (see slides 2-4 in this article’s accompanying PDF slide presentation or this YouTube video).
On the music reproduction side, convenience, portability and low cost are the purchase-driving factors in this Mobile iPod Age of entertainment; sound quality need only be “good enough” [3]. The problem is that no one seems able to define what “good enough” sound quality means for Generation Y. Given that they represent the largest and youngest demographic in terms of music and audio equipment consumption, it's important to understand the attitudes and tastes of these twenty-somethings before it's too late. Getting these Millennials hooked on good sound now means they're more likely to upgrade the audio systems in the homes and automobiles they acquire as they grow older and wealthier.

A common belief being spread by the media is that Generation Y is indifferent to sound quality, or worse, prefers the tinny, sizzling sound of low-bit-rate MP3 over higher quality lossless music formats (slide 4). This is based on an informal study by Stanford music professor Jonathan Berger, who over a seven-year period found his students increasingly preferred music coded in lower quality lossy MP3 formats over higher quality lossless formats [1]-[5]. “I think our human ears are fickle,” says Berger. “What’s considered good or bad sound changes over time. Abnormality can become a feature” [1].


While Berger’s unpublished study raises more scientific questions than it answers, it has been widely reported by the media and has captured the attention of consumer and automotive audio marketing executives, who ultimately decide what level of sound quality is “good enough” for Generation Y (slides 4-7). There is an increased risk that sound quality may become the sacrificial lamb for products targeted at Millennials (they can’t tell the difference, after all), with the savings diverted to more salient “purchase drivers” such as industrial design, more features, advertising, and celebrity endorsements.
If someone doesn’t soon stand up for Generation Y and show some evidence that they care about sound quality, its death may become a self-fulfilling prophecy.

Some New Experiments on Generation Y Sound Quality Preferences For Music Reproduction
To this end, I recently conducted two listening experiments on a group of high school students (the younger half of Generation Y) to determine if their sound quality preferences in music reproduction were: a) consistent with those of older trained listeners used for product evaluation at Harman International, or alternatively b) indifferent or skewed towards preferring less accurate sound (slide 8).
Two research questions were asked in separate tests:
  1. Do the students prefer the sound quality of lossy MP3 (128 kbps) music reproduction over the original lossless CD version?
  2. Do the students prefer music reproduced through a more accurate loudspeaker given four different options that vary in accuracy and sound quality?
The students, who ranged in age from 15 to 18 years, were visiting Harman on a class field trip (slide 9). The listening tests and their results are summarized in the following sections.

Do High School Students Prefer Lossy MP3 Music Over Lossless CD-Quality Formats?
In the first double-blind listening experiment (slide 11), the students were presented two versions of the same program selection encoded in:
  1. MP3 (LAME 3.97, version 2.3; constant bit rate @ 128 kbps). Note that this is a two-year-old MP3 encoder that may be more representative of what Berger used in his study.
  2. CD - The original lossless CD-quality version (16-bit, 44.1 kHz).
After hearing the same music several times in both MP3 and CD formats, the listeners indicated on a scoresheet which one they preferred: A or B. They were also asked to indicate the magnitude of preference (slight/moderate/strong), and provide comments describing the differences in sound quality they heard.
A total of 12 trials were completed, in which preference choices were recorded for four different short program loops, each presented in three separate trials (slide 10). Three music programs and a recording of applause at a live concert were chosen for their ability to reveal audible differences between the lossy and lossless formats. The applause provided listeners with a familiar acoustic signal that the author felt most listeners could easily judge based on its apparent naturalness.
The order of programs and MP3/CD formats was randomized by the listening test software to eliminate any order-related effects. Switching between A and B was performed by the test administrator via a custom Harman listening test software application. The listening test was conducted in the Harman International Reference Room, which provided a quiet, controlled acoustic environment typical of a domestic listening room. Listening was done through a high-quality stereo playback system (JBL LSR 6336 with four JBL HB5000 subwoofers) calibrated at the listening locations. A comfortable playback level (on average 78 dB, B-weighted) was used throughout the tests.
Two groups of nine listeners each participated in two separate listening sessions, which lasted about 30 minutes each.

Listening Test Results: Students Prefer Music in Lossless CD Versus MP3 Formats
When all 12 trials were tabulated across all listeners, the high school students preferred the lossless CD format over the MP3 version in 67% of the trials (slide 16). The CD format was preferred in 145 of 216 trials (p<0.001).
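Under the null hypothesis that the students were simply guessing, the chance of 145 or more CD preferences in 216 trials can be checked with an exact one-sided binomial test. A minimal sketch (not the actual analysis software used in the study):

```python
from math import comb

def binomial_p_one_sided(successes: int, trials: int) -> float:
    """Exact one-sided binomial test against chance (p = 0.5): the
    probability of observing at least `successes` hits by guessing."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

# 145 CD preferences out of 216 trials
p = binomial_p_one_sided(145, 216)
print(f"p = {p:.1e}")  # well below 0.001
```

The same function applied to an individual listener's 12 trials shows why consistent choices in 9 or more trials are needed before a single listener's preference can be called statistically significant.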
As expected, there were differences among individual students in their ability to make consistent preference choices (slide 17). Nearly 40% of the listeners made enough consistent choices (at least 9 of 12 trials) to establish a statistically significant preference for CD (p <= 0.054). Only one of the 18 listeners preferred MP3 over CD (7 versus 5 trials), although that preference was not statistically significant (p = 0.19). The remaining listeners were either guessing or inconsistent in their choices. With additional training and trials, the performance of these listeners would likely improve.
On average, the magnitude of preference for CD over MP3 was also stronger based on the frequency of responses assigned to the categories of preference: slight, moderate and strong preference (slide 18). When CD format was preferred, listeners assigned a proportionally higher number of moderate-to-strong responses compared to when MP3 was the preferred choice.
The preference for CD over MP3 formats was relatively independent of the program selection (slide 19). The CD format was preferred for all four programs, with only a slight drop (68.5% to 63%) for program JW.
Finally, the comments given by the more consistent listeners (slide 20) reveal the nature of the audible differences between MP3 and CD. The CD version was often described as sounding more dynamic and brighter, with more impact on percussive sounds. MP3 versions of the programs were described as sounding duller and dynamically compressed, with swirling pitch-modulation artifacts on vocals and strings.

Do High School Students Prefer Neutral/Accurate Loudspeakers?
Given that the high school students preferred the higher quality music format (CD over MP3), would their taste for accurate sound reproduction hold true when evaluating different loudspeakers? To test this question, the students participated in a double-blind loudspeaker test where they rated four different loudspeakers on an 11-point preference scale. The preference scale had semantic differentials at every second interval defined as: 1 (really dislike), 3 (dislike), 5 (neutral), 7 (like) and 9 (really like). The relative distances in ratings between pairs of loudspeakers indicated the magnitude of preference: ≥ 2 points represent a strong preference, 1 point a moderate preference and ≤ 0.5 point a slight preference.
The four loudspeakers were floor-standing models (slide 22): Infinity Primus 362 ($500 a pair), Polk RTi10 ($800), Klipsch RF35 ($600), and Martin Logan Vista ($3,800). Each loudspeaker was installed on the automated speaker shuffler in Harman International’s Multichannel Listening Lab, which positions each loudspeaker in the same location when it is active. In this way, positional biases are removed from the test. Each loudspeaker was level-matched to within 0.1 dB at the primary listening location.
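Level matching of this kind amounts to measuring each loudspeaker's SPL with a common test signal and applying a small corrective gain. A hypothetical illustration (the measurement signal and exact procedure are not detailed here):

```python
def trim_gain_db(measured_spl_db: float, target_spl_db: float) -> float:
    """Gain in dB needed to bring a loudspeaker to the target SPL."""
    return target_spl_db - measured_spl_db

def db_to_amplitude(gain_db: float) -> float:
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10 ** (gain_db / 20)

# a speaker measuring 85.4 dB against an 85.0 dB target needs a -0.4 dB trim
gain = trim_gain_db(85.4, 85.0)
scale = db_to_amplitude(gain)
```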
Listeners completed a series of four trials in which they could compare each of the four loudspeakers reproducing the same program a number of times before rating each loudspeaker on the 11-point preference scale. Two different music programs were used, each evaluated twice. At the beginning of each trial, the computer randomly assigned the four letters (A, B, C, D) to the loudspeakers, so the loudspeaker ratings in consecutive trials were more or less independent (slide 23).

Results: High School Students Prefer More Accurate, Neutral Loudspeakers
When averaged across all listeners and programs, there was a moderate-to-strong preference for the Infinity Primus 362 loudspeaker over the other three choices (slide 25). In the results shown in the accompanying slide, as an industry courtesy, the brands of the competitors’ loudspeakers are identified simply as Loudspeakers B, C and D.
As a group, the listeners were not able to formulate preferences among the three lower-rated loudspeakers B, C and D, which were all imperfect in different ways. For an untrained listener, sorting out these different types of imperfections and assigning consistent ratings can be a difficult task without practice and training [5].
The individual listener preferences (slide 26) reveal that 13 of the 18 listeners (72%) preferred the Infinity loudspeaker based on their ratings averaged across all programs and trials.
When comparing the students’ rank ordering of the loudspeakers to that of the trained Harman listeners (slide 27), we see good agreement between the two groups. The one exception is Loudspeaker C, which the trained listeners strongly disliked. The general agreement between trained and untrained listeners’ loudspeaker preferences in this test is consistent with previous studies in which a different set of listeners and loudspeakers was used [5],[6]. As found in the previous study, the trained listeners on average rated each loudspeaker about 1.5 preference-scale points lower than the untrained listeners, and they were more discriminating and consistent in their ratings [5],[7].
The comprehensive set of anechoic measurements for each loudspeaker is compared to its preference rating (slide 28). There are clear visual correlations between the set of technical measurements and listeners’ loudspeaker preference ratings. The most preferred loudspeaker (Infinity Primus 362) had the flattest measured on-axis and listening window curves (top two curves), and the smoothest first reflection, sound power and first reflection/sound power directivity index curves (the third, fourth, fifth and sixth curves from the top). The other loudspeaker models tended to deviate from this ideal linear behavior, which resulted in lower preference ratings. Again, this relationship between loudspeaker preference and a linear frequency response is consistent with similar studies conducted by the author and Toole [9],[10].
Finally, as these experiments illustrate, sound quality doesn't necessarily cost more money. The most accurate and preferred loudspeaker - the Infinity Primus 362 - was also the least expensive in the group at $500 a pair. It costs no more to make a loudspeaker sound good than it does to make it sound bad. In fact, the least accurate loudspeaker (Loudspeaker C) cost almost eight times as much ($3,800) as the most accurate and preferred model. Sound quality can be achieved by paying close attention to the variables that scientific research says matter, and then applying good engineering design to optimize those variables at every product price point.

Conclusions
A group of 18 high school students participated in two double-blind listening tests that measured their sound quality preferences for music reproduced in lossy (MP3 @ 128 kbps) and lossless (CD quality) formats, as well as music reproduced through loudspeakers that varied in accuracy. In both tests, the high school students preferred the most accurate option, preferring CD over MP3, and the most accurate loudspeaker over the less accurate options.
While this study is still in its early phase, these preliminary results suggest that these teenagers can reliably discriminate among different degradations in sound quality in music reproduction. When given the opportunity to hear and compare different qualities of sound reproduction, the high school students preferred the higher quality, more accurate reproduction over the lower quality choices.
The audio industry should not discount the potential opportunities to provide a higher quality audio experience to members of Generation Y. The popular belief that they don’t care about or appreciate sound quality needs to be critically reexamined. These data suggest there are opportunities to sell good-sounding audio products to Generation Y as long as the products hit the right features and price points. The audio industry should also provide these consumers with the necessary education and information (i.e., meaningful performance specifications) to distinguish the good-sounding products from the duds. Science can already do this (review slide 28); it’s simply a matter of making the information more widely available.

References
[1] Joseph Plambeck, “In Mobile Age, Sound Quality Steps Back,” New York Times, May 9, 2010.
[2] Andrew Edgecliffe-Johnson, “Could a Pair of Headphones Save the Music Business?” Financial Times, June 12 2010.
[3] Robert Capps, “The Good Enough Revolution: When Cheap and Simple Is Just Fine” Wired Magazine, August 24, 2009.
[4] Dale Dougherty, “The Sizzling Sound of Music,” O’Reilly Radar, March 1 2009.
[5] Nora Young, Full Interview: Jonathan Berger on mp3s and “Sizzle”, CBC Radio , March 24, 2009.
[6] The Loudness Wars: Why Music Sounds Worse, from All Things Considered, NPR Music, December 31, 2009.
[5] Sean E. Olive, "Differences in Performance and Preference of Trained Versus Untrained Listeners in Loudspeaker Tests: A Case Study," J. AES, Vol. 51, issue 9, pp. 806-825, September 2003. (download for free courtesy of Harman International).
[6] Sean Olive, “Part 1 - Do Trained Listeners Prefer the Same Loudspeakers as Untrained Listeners?” Audio Musings, December 26, 2008.
[7] Sean Olive, Part 2 - Differences in Performance of Trained Versus Untrained Listeners, Audio Musings, December 27, 2008.
[8] Sean Olive, “Part 3 - Relationship between Loudspeaker Measurements and Listener Preferences”, Audio Musings, December 28, 2008.
[9] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1," J. AES, Vol. 34, issue 4, pp. 227-235, April 1986. (download for free courtesy of Harman International).
[10] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2," J. AES, Vol. 34, issue 5, pp. 323-348, May 1986. (download for free courtesy of Harman International).

Saturday, May 1, 2010

Evaluating the Sound Quality of iPod Music Stations: Part 3 Measurements



In Part 3 of this article, the acoustical measurements of three popular iPod Music Stations (Harman Kardon MS100, Bose SoundDock 10 and Bowers & Wilkins Zeppelin) are examined to see if they corroborate listeners’ sound quality ratings of the products based on controlled double-blind listening tests. Part 2 summarized the results of those listening tests, and Part 1 described the listening test methodology used for this research.
Throughout this article, I will refer to some slides of a presentation that can be downloaded as a PDF or viewed as a YouTube video.
Mono or Stereo Acoustical Measurements?
There is a substantial body of scientific research on the subjective and objective testing of conventional stereo loudspeakers [1]-[5]. Unfortunately, the same is not true for iPod Music Stations, which raises several research questions about how they should be evaluated and measured.
The first important question is whether the acoustical measurements should be done in mono or stereo. Due to the proximity of the left and right channel transducer arrays in Music Stations, there is the potential for constructive and destructive interference when both channels are active, varying with frequency and the relative inter-channel levels and phases of the music signals. To study this phenomenon, the left and right channels were measured and analyzed both as single channels and combined. Generally, we found very little difference in the frequency responses (magnitude and phase) of the left and right channels. Combining the two channels only led to the expected 6 dB increase in sound pressure level (SPL).
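The expected +6 dB follows from coherent summation: two identical, in-phase signals double the pressure amplitude (20·log10 2 ≈ 6 dB), whereas uncorrelated signals of equal level sum on a power basis for only +3 dB. A quick sketch:

```python
import math

def coherent_sum_db(n: int) -> float:
    """Level gain when n identical, in-phase sources combine (amplitudes add)."""
    return 20 * math.log10(n)

def incoherent_sum_db(n: int) -> float:
    """Level gain when n equal-level uncorrelated sources combine (powers add)."""
    return 10 * math.log10(n)

print(round(coherent_sum_db(2), 2))    # 6.02 dB: both channels driven identically
print(round(incoherent_sum_db(2), 2))  # 3.01 dB
```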
Anechoic Measurements of the Music Stations
Each Music Station was measured at a distance of 2 meters in the large anechoic chamber at Harman International. The chamber is anechoic down to 60 Hz, and this is extended to 20 Hz through a calibration procedure. Each Music Station was subjected to the same battery of measurements used for designing and testing Revel, Infinity and JBL home loudspeakers. A total of 70 frequency response measurements were taken at 10-degree increments in both horizontal and vertical orbits (slide 4). These measurements were then spatially averaged and weighted to characterize the direct, early and late reflected sounds in a typical listening room, and the directivity indices were calculated from them (slides 5-8).
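The exact angular weighting Harman applies is not given here, but the basic operations are power-averaging of magnitude responses over sets of angles and taking the difference between the listening-window and sound-power curves to obtain a directivity index. A simplified, unweighted sketch:

```python
import math

def power_average_db(levels_db):
    """Average a set of dB magnitudes on a power basis, as is usual for
    spatially averaged curves such as sound power."""
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

def directivity_index_db(listening_window_db, sound_power_db):
    """DI at each frequency: listening-window level minus sound-power level."""
    return [lw - sp for lw, sp in zip(listening_window_db, sound_power_db)]

# hypothetical levels at one frequency, measured at several off-axis angles
sound_power = power_average_db([90.0, 88.0, 85.0, 80.0])
```

Power averaging (rather than averaging the dB values directly) keeps deep, highly localized dips from dominating the averaged curve.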
The family of measurement curves (slide 9) reveals significant differences among the three Music Stations in terms of their smoothness and low-frequency extension below 70 Hz.
Music Station A has the smoothest frequency response across the family of curves, which corroborates listeners’ comments about its neutral sound and absence of colorations (see slide 11 of Part 2). There is also physical evidence in the measurements that explain listener comments about Music Station A sounding a bit bright and thin, due to a combination of the upward spectral tilt in its listening window curve, and its higher low frequency cutoff.
Music Station B has more peaks and dips in its curves than Music Station A, which contributed to the higher incidence of listener comments about audible coloration. Particularly problematic is the large, broad resonance at 500 Hz that is visible in both the direct and reflected sounds produced by the product. However, there is nothing in the measurements to explain listeners’ complaints about its boomy bass.
Music Station C clearly has the least tidy set of measurement curves with a significant hole centered at 2 kHz in the on-axis curve. There are visible resonances in the measurements that elicited frequent listener comments about “midrange unevenness” and “coloration.” Finally, the sound power response and directivity indices reveal that this Music Station becomes increasingly directional at higher frequencies compared to its competitors. This could contribute to coloration and dullness at off-axis listening positions and at further listening distances.
Relationship between Anechoic Measurements and Listener Preference
The anechoic measurements of the Music Stations are shown again in slide 10 along with the listener preference ratings. From this, we see that the overall smoothness of the family of curves appears to be an important underlying factor influencing listeners’ Music Station preference ratings.
Correlations Between Anechoic Measurements and Perceived Spectral Balance: The Direct Sound Influences the Perceived Spectral Balance Above 300 Hz
There has been a 30+ year debate in the audio community regarding which set of acoustical measurements best predict the loudspeaker’s perceived sound quality in a typical listening room. There are several different camps that include the direct sound response advocates, the sound power response advocates, the in-room measurement advocates, and others, like myself, who argue that you need a combination of all of the above measurements to accurately predict how the loudspeakers will sound in a room.
One way to tackle this debate is to study the correlation between different loudspeaker measurements and listeners’ perceived spectral balance of the loudspeakers in a room. Slide 11 shows the perceived spectral balance ratings of the Music Stations versus the family of anechoic curves that include the listening window (direct sound), first reflections and sound power response.
For Music Station A, there is good agreement between the perceived spectral balance and the listening window curve, which represents the direct sound over a ± 30 degree horizontal angle. For Music Station B, there is generally poor agreement: listeners complained about boomy bass, yet there is nothing in these measurements to suggest why. There is clearly some information missing in the anechoic measurements and/or perhaps the subjective ratings are faulty. We will come back to this topic later.
For Music Station C, there is good agreement between the perceived spectral balance and the listening window curve (direct sound), with indications that the resonances centered at 1.5 and 3.5 kHz were heard and registered by the listeners.
In summary, it seems that for at least two of the Music Stations, the perceived spectral balance can be approximated from the listening window curves that represent the direct sound. However, the anechoic measurements are missing information needed to explain perceptual effects below 300 Hz.
In-Room Measurements of the Music Stations
Below about 300 Hz, the room acoustics and the Music Station/listener positions can have a significant influence on the perceived quality of reproduced sound. Yet, these physical effects are not captured in the anechoic measurements described in the previous section. To further examine these effects, steady-state frequency response measurements of the Music Stations were taken at the primary listening seat at 6 different microphone positions, and then spatially averaged to remove highly localized acoustical interference effects (slide 12). The 1/6-octave smoothed curves for each Music Station are shown in slide 13. Below 200 Hz, there is evidence of room resonances (high Q peaks and dips) and boundary effects that were absent in the previous anechoic measurements (slide 9). Music Station A had less apparent boundary gain than the other two products, probably because the boundary effect was accounted for in its design.
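Both steps — power-averaging the six microphone positions and applying 1/6-octave smoothing — can be sketched as follows; the windowing details of the actual measurement software are an assumption here:

```python
import math

def spatial_average_db(curves_db):
    """Power-average steady-state responses (dB) from several mic positions;
    each element of curves_db is one position's curve over the same frequencies."""
    n = len(curves_db)
    return [10 * math.log10(sum(10 ** (c[i] / 10) for c in curves_db) / n)
            for i in range(len(curves_db[0]))]

def fractional_octave_smooth(freqs_hz, levels_db, fraction=6):
    """Smooth a response with a 1/fraction-octave sliding window (power mean)."""
    smoothed = []
    for f in freqs_hz:
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        band = [l for g, l in zip(freqs_hz, levels_db) if lo <= g <= hi]
        smoothed.append(10 * math.log10(sum(10 ** (l / 10) for l in band) / len(band)))
    return smoothed
```

Because the smoothing window is a fixed fraction of an octave, it widens with frequency, roughly mirroring the ear's critical-band resolution.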

Correlation Between In-Room Measurements and Perceived Spectral Balance: The Influence of Room and Boundary Effects Below 300 Hz
The in-room measurements are plotted in slide 13 along with listeners’ perceived spectral balance ratings. Here, the in-room measurements have been super-smoothed (1-octave) to better correspond to the frequency resolution of the subjective ratings.
Below 300 Hz, there is better agreement between the in-room measurements and listeners’ spectral ratings than observed using the anechoic measurements (slide 11). However, above 300 Hz, there is generally better agreement between the anechoic measurement and spectral ratings, particularly using the listening window curve that represents the direct sound. This confirms the important role that the direct sound plays in our perception of reproduced sound. Below 300 Hz, the room’s standing waves and boundary effects play a dominant role in the quality and quantity of bass we hear. Previous studies [5] have shown bass quality accounts for 30% of listener preference, and cannot be ignored.
Dynamic Compression Measurements
Our scientific understanding of the perception and measurement of nonlinear distortions in loudspeakers is still quite poor. There are currently no standard loudspeaker measurements that adequately capture the perceptual significance of dynamic compression and the associated distortions it produces. This is an area of audio that is in need of more research.
Listeners reported that Music Station A had fewer audible nonlinear distortions than the other two Music Stations. However, it was not clear whether the distortions were real or due to a cognitive bias known as the “halo effect.” Examining the objective distortion measurements should help clarify which is the case.
The dynamic linearity of the Music Stations was tested by measuring their anechoic frequency response at playback levels from 76 to 100 dB SPL (at a 1 meter distance) in 6 dB increments. A relatively short (4 s) log sweep was used as the test signal to minimize thermal effects on the transducers. Consequently, the measured dynamic compression shown below was largely related to the behavior of the electronic limiters in the Music Stations, which are designed to prevent amplifier clipping that could otherwise damage the transducers.
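A log (exponential) sine sweep of this kind can be generated with Farina's formula, in which the instantaneous frequency rises exponentially from f1 to f2; the parameters beyond the 4 s length are assumptions for illustration:

```python
import math

def log_sweep(f1=20.0, f2=20000.0, duration=4.0, fs=48000):
    """Exponential (log) sine sweep: instantaneous frequency rises
    exponentially from f1 to f2 over `duration` seconds."""
    k = math.log(f2 / f1)
    return [math.sin(2 * math.pi * f1 * duration / k *
                     (math.exp(k * (i / fs) / duration) - 1))
            for i in range(int(duration * fs))]

sweep = log_sweep(duration=4.0)  # 4 s at an assumed 48 kHz sample rate
```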
Slide 16 shows the dynamic compression for each Music Station. The frequency responses measured at 82, 88, 94 and 100 dB SPL have been normalized to the 76 dB measurement, so any dynamic compression appears as a deviation from 0 dB. In examining these graphs, Music Station A produced 6 dB more output (100 dB @ 1 meter) than the other Music Stations without significant compression effects.
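The normalization step can be sketched as follows — subtract the 76 dB reference curve and the nominal level increase, so a perfectly linear system sits at 0 dB and compression shows up as negative deviations (the data values here are hypothetical):

```python
def compression_deviation_db(responses_db, reference_level=76):
    """Normalize response curves measured at rising playback levels to the
    lowest-level measurement; deviations from 0 dB expose dynamic compression.
    responses_db maps nominal playback SPL -> measured response curve (dB)."""
    ref = responses_db[reference_level]
    deviations = {}
    for level, curve in responses_db.items():
        step = level - reference_level  # nominal gain applied (6, 12, ... dB)
        deviations[level] = [c - r - step for c, r in zip(curve, ref)]
    return deviations

# hypothetical two-point curves: 1 dB of compression appears at 82 dB
dev = compression_deviation_db({76: [80.0, 80.0], 82: [86.0, 85.0]})
```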
On the surface, the relationship between these measurements and listeners’ distortion ratings seems straightforward: the Music Stations with the higher amounts of compression received lower distortion ratings (slide 17). However, the SPLs at which the compression effects occurred (> 94 dB) were higher than those used in the listening test.

Harmonic Distortion Measurements
Harmonic distortion (second and third harmonic only) measurements were made in the anechoic chamber at an SPL of 95 dB. The distortion levels of the harmonics are plotted along with the fundamental for each of the Music Stations in slide 18. Note that the levels of the harmonics have been raised 20 dB for the sake of clarity.
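For reading plots like these, a harmonic level expressed in dB below the fundamental converts directly to percent distortion:

```python
def distortion_percent(db_below_fundamental: float) -> float:
    """Convert a harmonic's level, in dB below the fundamental, to percent
    distortion (amplitude ratio times 100)."""
    return 100 * 10 ** (-db_below_fundamental / 20)

print(distortion_percent(40))  # 1.0  -> a harmonic 40 dB down is 1% distortion
print(distortion_percent(20))  # 10.0 -> 20 dB down is 10% distortion
```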
All of the Music Stations exhibited relatively high distortion at low frequencies below 100 Hz, with generally less harmonic distortion at higher frequencies. Music Station B differentiated itself by having higher levels of second and third harmonic distortion between 100 Hz and 1 kHz. Music Station C had the lowest measured distortion even though it received the lowest preference and distortion ratings from the listeners.
In conclusion, the harmonic distortion measurements of the Music Stations are not particularly good at predicting listeners’ distortion ratings, or overall preference in sound quality. This confirms many previous loudspeaker studies that have reported that harmonic distortion measurements are poor predictors of listeners’ overall impression of the loudspeaker. This can be explained by the fact that the distortions are often below the threshold of audibility, and the measurements themselves do not account for the masking properties of human hearing.

Conclusions
This article has shown evidence that a combination of comprehensive anechoic and in-room measurements can help explain listeners’ preferences and spectral balance ratings of the Music Stations evaluated in controlled listening tests.
Above 300 Hz, the anechoically derived listening window curve correlated well with listeners’ spectral balance ratings, whereas the in-room measurements better explained the Music Stations’ acoustical interactions with the room below 300 Hz. In these particular tests, the overall smoothness of the on- and off-axis frequency response curves provided the best overall indicator of listeners’ preferences and their comments.
Dynamic compression measurements revealed significant differences among the Music Stations in terms of their linear SPL output capability. The most preferred Music Station could play 6 dB louder (100 dB SPL @ 1 meter) than the other units without exhibiting significant dynamic compression. It is unlikely that this was a factor in the listening tests since the units were evaluated at a comfortable average level of 78 dB (B-weighted, slow). Finally, distortion measurements revealed some differences among the products but had no clear correlation with listeners’ sound quality ratings. This highlights the need for further research into the perception and measurement of nonlinear distortion in loudspeakers so that loudspeaker engineers can optimize their designs using psychoacoustic criteria.
References
[1] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1," J. AES, Vol. 34, issue 4, pp. 227-235, April 1986. (download for free courtesy of Harman International).
[2] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2," J. AES, Vol. 34, issue 5, pp. 323-348, May 1986. (download for free courtesy of Harman International).
[3] W. Klippel, "Multidimensional Relationship between Subjective Listening Impression and Objective Loudspeaker Parameters", Acustica 70, Heft 1, S. 45 - 54, (1990).
[4] Sean E. Olive, “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results,” presented at the 116th AES Convention, preprint 6113 (May 2004).
[5] Sean E. Olive, “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part 2 - Development of the Model,” presented at the 117th AES Convention, preprint 6190 (October 2004).

Sunday, November 1, 2009

The Subjective and Objective Evaluation of Room Correction Products

In a recent article, I discussed audio’s “circle of confusion,” which exists within the audio industry due to the lack of performance standards for the loudspeakers and rooms through which recordings are monitored. As a result, the quality and consistency of recordings remain highly variable. A significant source of variation in the playback chain comes from acoustical interactions between the loudspeaker and room, which can produce >18 dB variations in the in-room response below 300-500 Hz.


In recent years, audio manufacturers have begun to offer so-called “room correction” products that measure the in-room response of the loudspeakers at different seating locations, and then automatically equalize them to a target curve defined by the manufacturer. The sonic benefits of these room correction products are generally not well known since, to my knowledge, no one has yet published the results of a well-controlled, double-blind listening test on room correction products. To what degree do room correction products improve or possibly degrade the sound quality of the loudspeaker/room compared to the uncorrected version of the loudspeaker/room? Can the sound quality ratings of the different room correction products be explained by acoustical measurements performed at the listening location?


A Listening Experiment on Commercial Room Correction Products


To answer these questions, we conducted some double-blind listening tests on several commercial room correction products [1]. I recently presented the results of those tests at the 127th Audio Engineering Society Convention in New York. A copy of my AES Keynote presentation can be found here.


A total of three different commercial products were compared to two versions of a Harman prototype room correction that will find its way into future Harman consumer and professional audio products. The products included the Anthem Statement D1, the Audyssey Room Equalizer, the Lyngdorf DPA1, and two versions of the Harman prototype product (see slide 7). Included in the test was a hidden anchor: the same loudspeaker and subwoofer without room correction. In this way, we could directly compare how much each room correction improved or degraded the quality of sound reproduction.


Each room correction device was tested in the Harman International Reference Room using a high-quality loudspeaker (B&W 802N) and subwoofer (JBL HB5000) (slides 8 and 9). A calibration was performed for each room correction over the six listening seats according to the manufacturer’s instructions. Two different calibrations were performed with the Harman prototype: one based on a multipoint average over all six seats, the other on a six-microphone spatial average focused on the primary listening seat. The different room corrections were level-matched for equal loudness at the listening seat.
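When responses from several seats or microphone positions are combined, the magnitudes are normally power-averaged rather than averaged in dB, so that deep position-dependent nulls do not dominate the average. A small sketch of that spatial average, using made-up numbers:

```python
import math

def spatial_average_db(responses_db):
    """Power-average several measured magnitude responses (in dB) across
    microphone/seat positions, frequency bin by frequency bin."""
    n_seats = len(responses_db)
    n_bins = len(responses_db[0])
    avg = []
    for k in range(n_bins):
        power = sum(10 ** (seat[k] / 10.0) for seat in responses_db) / n_seats
        avg.append(10.0 * math.log10(power))
    return avg

# Hypothetical responses at two seats for three frequency bins (dB)
print(spatial_average_db([[0.0, -6.0, 3.0], [0.0, 0.0, -3.0]]))
```

Averaging in the power domain means a -6 dB dip at one seat pulls the average down by only about 2 dB, which better reflects what equalization can usefully act on.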


The Listener's Task


A total of eight trained listeners with normal hearing participated in the tests. Using a multiple comparison method, the listener could switch at will between the six different room corrections, and rate them according to overall preference, spectral balance, as well as give comments (see slide 14). The administration of the test, including the design, switching, collection and storage of listener responses, was computer automated via Harman’s proprietary Listening Test Software. A total of nine trials were completed using three different programs repeated three times. The presentation order of the program and room corrections was randomized.
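A randomized design of this kind, three programs times three repeats with the presentation order of the corrections shuffled independently per trial, can be sketched as follows (the program names and seed are placeholders, not the actual test materials):

```python
import random

def make_trials(programs, corrections, repeats=3, seed=None):
    """Build a randomized trial list: each program is presented `repeats`
    times, and the order of the corrections is shuffled per trial."""
    rng = random.Random(seed)
    trials = []
    for prog in programs * repeats:
        order = corrections[:]          # copy so the master list is untouched
        rng.shuffle(order)
        trials.append((prog, order))
    rng.shuffle(trials)                 # randomize program order, too
    return trials

trials = make_trials(["jazz", "pop", "orchestral"],
                     [f"RC{i}" for i in range(1, 7)], seed=1)
print(len(trials))  # 9 trials: 3 programs x 3 repeats
```

Randomizing both the program order and the correction order within each trial is what prevents position and order biases from contaminating the preference ratings.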



Results: Significant Preferences For Different Room Corrections


The mean preference ratings and 95% confidence intervals are shown above in Figure 1 (or slide 17). The room correction products are coded RC1 through RC6 in descending order of preference; the identities of the products associated with the results are not relevant for the purpose of this article. Three of the five room corrections (RC1-RC3) were strongly preferred over no room correction (RC4). However, one of the room corrections (RC5) was rated equal to the no-correction treatment (RC4), and another (RC6) was rated much worse. Overall, the sound quality of RC6 was rated "very poor" based on the semantic definitions of the preference scale.
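For readers who want to produce this kind of plot from their own listening data, a mean rating with a normal-approximation 95% confidence interval can be computed as below. The ratings shown are invented for illustration, not the actual test data:

```python
import statistics

def mean_ci95(ratings):
    """Mean rating and 95% confidence half-width, using the normal
    approximation (1.96 * standard error) for simplicity."""
    n = len(ratings)
    m = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / n ** 0.5  # sample std / sqrt(n)
    return m, 1.96 * sem

# Hypothetical per-observation preference ratings for one room correction
m, h = mean_ci95([7.2, 6.8, 7.5, 6.9, 7.1, 7.4, 6.6, 7.0])
print(f"{m:.2f} +/- {h:.2f}")
```

With small listener panels a t-distribution critical value would be more defensible than 1.96, but the normal approximation keeps the sketch short.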


Perceived Spectral Balance of Room Corrections


Listeners rated the perceived spectral balance of each room correction across seven equal-width, logarithmically spaced frequency bands. The mean spectral balance ratings averaged across all listeners and programs are shown in slide 18. The more preferred room corrections were perceived to have a flatter, smoother spectral balance with extended bass. The less preferred room correction products (RC5 and RC6) were perceived to have too little bass, which made them sound thin and bright.
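For reference, the center frequencies of seven equal-width bands on a logarithmic axis work out as follows. The 20 Hz-20 kHz span is my assumption for illustration; the actual band edges used in the test are shown on slide 18:

```python
import math

def log_band_centers(f_lo=20.0, f_hi=20000.0, n=7):
    """Center frequencies of n equal-width bands on a log-frequency axis."""
    step = math.log10(f_hi / f_lo) / n          # band width in decades
    return [round(f_lo * 10 ** (step * (i + 0.5)), 1) for i in range(n)]

print(log_band_centers())
```

Equal widths on a log axis roughly match how we hear pitch and timbre, so each band carries comparable perceptual weight in the rating task.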



Listener Comments on Room Corrections


Listeners also gave comments related to the spectral balance of the different room correction products. Slide 19 shows the number of times a particular comment was used to describe each room correction. The bottom row indicates the correlation between the preference rating and the frequency of the comment. The most preferred room corrections were described as "neutral" and "full," which corresponded to flatter, smoother, and more bass-extended spectral balance ratings. The least preferred room corrections (RC4-RC6) were described as colored, harsh, thin, and muffled, which corresponded to less flat, less smooth, and less bass-extended spectral balance ratings. Slide 20 graphically illustrates the same information as slide 19.
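The correlation in that bottom row is an ordinary Pearson coefficient between how often a comment was used for each product and the products' mean preference ratings. A sketch with invented counts, not the actual slide 19 data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical counts of the comment "neutral" per product (RC1..RC6)
# alongside hypothetical mean preference ratings for those products
neutral_counts = [9, 7, 6, 2, 1, 0]
preferences = [7.1, 6.8, 6.5, 4.0, 3.9, 1.5]
print(round(pearson_r(neutral_counts, preferences), 2))
```

A strongly positive coefficient for "neutral" and a strongly negative one for "thin" or "harsh" is what ties the free-form comments back to the preference scale.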


Correlation Between Subjective and Objective Measurements


In-room acoustical measurements were made at the six listening seats using a proprietary 12-channel audio measurement system developed by the Harman R&D Group. Slides 23 and 24 show the amplitude response of the different room corrections spatially averaged for the six seats (slide 23), and at the primary listening seat (slide 24). The measurements are plotted from top to bottom in descending order of preference, each vertically offset to more clearly delineate the differences. A few observations can be made:


  1. The six-seat spatially averaged curves (slide 23) of the room corrections do not explain listeners' room correction preferences as well as the spatially averaged curves taken at the primary seat (slide 24). This makes perfect sense since all of the listening was done in the primary listening seat.
  2. Looking at slide 24, the most preferred room corrections produced the smoothest, most extended amplitude responses measured at the primary listening seat. The largest measured differences among the different room corrections occur below 100 Hz and around 2 kHz where the loudspeaker had a significant hole in its sound power response. The room corrections that were able to fill in this sound power dip received higher preference and spectral balance ratings.
  3. A flat in-room target response is clearly not the optimal target curve for room equalization. The preferred room corrections had a target response with a smooth downward slope with increasing frequency, which tells us that listeners prefer a certain amount of natural room gain. Removing the room gain made the reproduced music sound unnatural and too thin, according to these listeners. This also makes perfect sense: the recording was likely mixed in a room where the room gain was also not removed, so removing it from the consumer's listening room would destroy the spectral balance of the music as the artist intended.
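The downward-sloping target described in point 3 is easy to parameterize as a constant tilt on a log-frequency axis. The -1 dB/octave slope and 100 Hz reference below are purely illustrative assumptions, not the slope used in these tests:

```python
import math

def target_curve_db(freqs_hz, slope_db_per_octave=-1.0, ref_hz=100.0):
    """In-room target level (dB): 0 dB at ref_hz, tilting downward toward
    high frequencies to preserve natural room gain. The slope value is an
    assumption for illustration."""
    return [slope_db_per_octave * math.log2(f / ref_hz) for f in freqs_hz]

for f in (100, 200, 1600, 12800):
    print(f, round(target_curve_db([f])[0], 1))
```

With this parameterization the target falls by the chosen slope every octave, so a -1 dB/octave tilt yields about -7 dB at 12.8 kHz relative to 100 Hz: a gentle, continuous roll-off rather than a shelf.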


Conclusions


There are significant differences in the subjective and objective performance of current commercial room correction products, as illustrated in these listening test results. When done properly, room correction can lead to significant improvements in the overall quality of sound reproduction. However, not all room correction products are equal: two of the tested products produced results that were no better than, or much worse than, the unequalized loudspeaker. Room correction preferences are strongly correlated with their perceived spectral balance and related attributes (coloration, full/thin, bright/dull). The most preferred room corrections produced the smoothest, most extended in-room responses measured around the primary listening seat.


More tests are underway to better understand and, if necessary, optimize the performance of Harman's room correction algorithms for different acoustical aspects of the room and loudspeaker.


References


[1] Sean E. Olive, John Jackson, Allan Devantier, David Hunt, and Sean Hess, “The Subjective and Objective Evaluation of Room Correction Products,” presented at the 127th AES Convention, New York, preprint 7960 (October 2009).