Showing posts with label listening tests. Show all posts

Saturday, November 7, 2015

Factors that Influence Listeners’ Preferred Bass and Treble Levels in Headphones

Most people would agree that headphone purchase decisions are heavily influenced by brand and styling (size, weight, color, quality of materials). But what is considered stylish and fashionable by me is not shared by my 15-year-old daughter (this week donning purple hair), and vice versa. In other words, the perceived visual aesthetic of a headphone is really in the eyes and mind of the beholder, and can vary with age, gender, culture, and other demographic categories.

But what about sound quality?  To what extent does the consumer's  age, gender, culture and prior listening experience influence their taste in headphone sound quality?  Is there a scientific basis for headphone manufacturers to design headphones that have different amounts of bass and treble aimed to satisfy the tastes of a targeted demographic group? 

To answer this question, we recently conducted a study on factors that influence listeners' preferred bass and treble balance in headphone sound reproduction. Using a method of adjustment, a total of 249 listeners adjusted the relative treble and bass levels of a headphone that was first equalized at the eardrum reference point (DRP) to match the in-room steady-state response of a reference loudspeaker in a reference listening room. Listeners repeated the adjustment five times using three stereo music programs. The listeners included males and females from different age groups, listening experiences, and nationalities (Canada, USA, Germany and China). The results provide evidence that the preferred bass and treble balances in headphones were influenced by several factors, including program and the listeners' age, gender, and prior listening experience. The younger and less experienced listeners on average preferred more bass and treble in their headphones than the older, more experienced listeners. Female listeners on average preferred about 1 dB less bass and 2 dB less treble than their male counterparts. Listeners over 55 years preferred less bass and more treble than the younger listeners, suggesting that they were compensating for the hearing loss associated with increased age.
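The bass and treble adjustments in a method-of-adjustment task are typically implemented as shelving filters. The sketch below is a simplified, hypothetical model of such shelves; the actual filter slopes and corner frequencies used in the study are not specified here:

```python
import numpy as np

def shelf_gain_db(freqs, gain_db, fc, kind):
    """First-order-like shelving magnitude response in dB.

    kind='bass' applies gain_db below fc; kind='treble' applies it above fc.
    Illustrative model only -- the study used real-time filters.
    """
    x = np.log2(np.asarray(freqs, dtype=float) / fc)
    if kind == "bass":
        weight = 1.0 / (1.0 + 2.0 ** (2.0 * x))   # ~1 well below fc, ~0 above
    else:
        weight = 1.0 / (1.0 + 2.0 ** (-2.0 * x))  # ~1 well above fc, ~0 below
    return gain_db * weight

# One hypothetical listener adjustment: +4 dB bass shelf below 200 Hz,
# -2 dB treble shelf above 4 kHz, applied to a flat reference target
freqs = np.logspace(np.log10(20.0), np.log10(20000.0), 200)
target = (shelf_gain_db(freqs, 4.0, 200.0, "bass")
          + shelf_gain_db(freqs, -2.0, 4000.0, "treble"))
```

Averaging such adjusted targets across listeners and repeats is what yields the preferred bass/treble levels reported in the paper.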



We recently presented the results of this study at the 139th Audio Engineering Society Convention in New York City, October 29-November 1, 2015. The paper is available for download in the AES E-Library. A PDF copy of the presentation can be found here, or you can view an animated version of the presentation on YouTube.

Monday, July 1, 2013

Harman Researchers Make Important Headway in Understanding Headphone Response

Todd Welti, Sean Olive and Elisabeth McMullin are shown above with their custom binaural mannequin, "Sidney," wearing a pair of AKG K1000s. No fit or leakage issues with these headphones.
Tyll Hertsens, Editor-in-Chief at InnerFidelity, recently visited our research labs in Northridge and wrote a nice story on his blog about his visit to Harman and our headphone research. You can read the entire story here.

In his story, Tyll summarizes three of our recent AES papers on headphones, the first of which I have already written about in this blog. I hope to write summaries of the other two papers in the coming weeks when I can find some free time.

Monday, April 22, 2013

The Relationship between Perception and Measurement of Headphone Sound Quality

Above: The brands and models of six popular headphones used in this study.

In many ways, our scientific understanding of the perception and measurement of headphone sound quality is 30  years behind our knowledge of loudspeakers. Over the past three decades, loudspeaker scientists have developed controlled listening test methods that provide accurate and reliable measures of   listeners' loudspeaker preferences, and their underlying sound quality attributes.  From the perceptual data, a set of acoustical loudspeaker measurements has been identified from which we can model and predict listeners' loudspeaker preference ratings with about 86% accuracy.
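As an illustration of how such a predictive model can be built, here is a toy ordinary-least-squares fit of preference ratings from measurement-derived predictors. The predictor names, values, and ratings below are all invented for illustration; this is not the actual loudspeaker preference model:

```python
import numpy as np

# Hypothetical predictors derived from anechoic curves (invented data):
# [response smoothness deviation (dB), LF extension (Hz), directivity deviation]
X = np.array([
    [1.2, 40.0, 0.8],
    [2.5, 55.0, 1.5],
    [0.9, 35.0, 0.6],
    [3.1, 70.0, 2.0],
    [1.8, 45.0, 1.1],
])
y = np.array([7.1, 5.2, 7.6, 4.0, 6.3])   # made-up preference ratings (0-10)

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, y)[0, 1]   # correlation of predicted vs. rated preference
```

The real model was trained on far more loudspeakers and listeners, but the principle is the same: regress perceptual ratings onto objective measurement features.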

In contrast to loudspeakers, headphone research is still in its infancy. Looking at published acoustical measurements of headphones, you will discover there is little consensus among brands (or even within the same brand) on how a headphone should sound and measure [1]. There exist too few published studies based on controlled headphone listening tests to identify which objective measurements and target response curves produce optimal sound quality. Controlled, double-blind comparative subjective evaluations of different headphones present significant logistical challenges to the researcher, including controlling headphone tactile and visual biases. Sighted biases related to price, brand, and cosmetics have been shown to significantly bias listeners' judgments of loudspeaker sound quality. Therefore, these nuisance variables must be controlled in order to obtain accurate assessments of headphone sound quality.

Todd Welti and I recently conducted a study to explore the relationship between the perception and measurement of headphone sound quality. The results were presented at the 133rd AES Convention in San Francisco,  in October 2012.  A PDF of the slide presentation referred to below can be found here. The AES preprint can be found in the AES E-library. The results of this study are summarized below.

Measuring The Perceived Sound Quality of Headphones

Double-blind comparative listening tests were performed on six popular circumaural headphones ranging in price from $200 to $1000 (see above slide). The listening tests were carefully designed to minimize biases from known listening test nuisance variables (slides 7-13). A panel of 10 trained listeners rated each headphone on overall preferred sound quality, perceived spectral balance, and comfort. The listeners also commented on the perceived timbral, spatial, and dynamic attributes of the headphones to help explain their underlying sound quality preferences.

The headphones were compared four at a time over three listening sessions (slide 12). Assessments were made using three music programs with one repeat to establish the reliability of the listeners' ratings. The order of headphone presentations, programs and listening sessions was randomized to minimize learning and order-related biases. The test administrator manually substituted the different headphones on the listener from behind, so listeners were not aware of the headphone brand, model or appearance during the test (slide 8). However, tactile/comfort differences were part of the test. Listeners could adjust the position of the headphones on their heads via lightweight plastic handles attached to the headphones.

Listeners Prefer Headphones With An Accurate, Neutral Spectral Balance

When the listening test results were statistically analyzed, the main effect on the preference rating was  due to the different headphones (slide 15).  The  preferred headphone models were perceived as having the most neutral, even spectral balance (slide 19) with the less preferred models having too much or too little energy in the bass, midrange or treble regions.  Frequency analysis of listeners' comments confirmed listeners' spectral balance ratings of the headphones, and proved to be a good predictor of overall preference (slide 20). The most preferred headphones were frequently described as "good spectral balance, neutral with low coloration, and good bass extension," whereas the less preferred models were frequently described as "dull, colored, boomy, and lacking midrange".

Looking at the individual listener preferences, we found good agreement among listeners in terms of which models they liked and disliked (slides 16 and 18). Some of the most commercially successful models were among the least preferred headphones in terms of sound quality. In cases where an individual listener had poor agreement with the overall listening panel's headphone preferences, we found either the listener didn't understand the task (they were less trained),  or the headphone didn't properly fit the listener, thus causing air leaks and poor bass response; this was later confirmed by doing in-ear measurements of the headphone(s) on individual listeners (slides 26-39).

Measuring the Acoustical Performance of Headphones

Acoustical measurements were made on each headphone using a GRAS 43AG Ear and Cheek simulator equipped with an IEC 711 coupler (slide 24). The measurement device is intended to simulate the acoustical effects of an average human ear, including the acoustical interactions between the headphone and the acoustical impedance of the ear. The headphone measurements shown below include these interactions as well as the transfer function of the ear, mostly visible in the graphs as a ~10 dB peak at around 3 kHz. It is important to note that since we are born with these ear canal resonances, we have adapted to them and don't "hear" them as colorations.

Relationship between Subjective and Objective Measurements 

Comparing the acoustical measurements of the headphones to their perceived spectral balance confirms that the more preferred headphones generally have a smooth and extended response below 1 kHz that is perceived as an ideal spectral balance (slide 25). The least preferred headphones  (HP5 and HP6)   have the most uneven measured and perceived frequency responses below 1 kHz, which generated listener comments such as "colored, boomy and muffled."  The measured frequency response of HP4 shows a slight bass boost below 200 Hz, yet on average it was perceived as sounding thin; this headphone was one of the models that had bass leakage problems for some listeners due to a poor seal on their ears.

Above: The left and right channel frequency response measurements of each headphone are shown above the  mean preference rating and 95% confidence interval it received in blind listening tests. The dotted green response on each graph shows the "perceived spectral balance" based on the listeners' responses.


Conclusions

In conclusion, this headphone study is one of the first of its kind to report results based on controlled, double-blind listening tests [2]. The results provide evidence that trained listeners preferred the headphones perceived to have the most neutral spectral balance. The acoustical measurements of the headphones generally confirmed and predicted which headphones listeners preferred. We also found that bass leakage related to the quality of fit and seal of the headphone on the listener's head/ears can be a significant nuisance variable in subjective and objective measurements of headphone sound quality.

It is important for the reader not to draw generalizations from these results beyond the conditions we tested. One audio writer has already questioned whether the headphone sound quality preferences of trained listeners can be extrapolated to the tastes of untrained younger demographics, whose apparent appetite for bass-heavy headphones might indicate otherwise. We don't know the answer to this question. For younger consumers, headphone purchases may be driven more by fashion trends and marketing B.S. (Before Science) than sound quality. While this question is the focus of future research, the preliminary data suggest that in blind A/B comparisons kids prefer headphones with accurate reproduction to colored, bass-heavy alternatives. This would tend to confirm findings from previous investigations into the loudspeaker preferences of high school and college students (both Japanese and American), which so far indicate that most listeners prefer accurate sound reproduction regardless of age, listener training, or culture.

Future headphone research may tell us (or not) that most people prefer accurate sound reproduction regardless of whether the loudspeakers are installed in the living room, the automobile, or strapped onto the sides of their head.  It makes perfect sense, at least to me. Only then will listeners hear the truth --  music reproduced as the artist intended.
________________________________

Footnotes
[1] Despite the paucity of good subjective measurements on headphones, there are some online resources where you can find objective measurements of headphones. You will be hard pressed to find a manufacturer who will supply these measurements of their products. The resources include HeadRoom.com, Sound & Vision Magazine, and InnerFidelity.com. Tyll Hertsens at InnerFidelity has a large database of headphone frequency response measurements that clearly illustrates the lack of consensus among manufacturers on how a headphone should sound and measure. There is even a lack of consistency among different models made by the same brand.

[2] Sadly, studies like the present one are so uncommon in our industry that Sound and Vision Magazine recently declared this paper the biggest audio story of 2012. Hopefully that will change sooner rather than later.

Saturday, May 1, 2010

Evaluating the Sound Quality of iPod Music Stations: Part 3 Measurements



In Part 3 of this article, the acoustical measurements of three popular iPod Music Stations (Harman Kardon MS100, Bose SoundDock 10 and Bowers & Wilkins Zeppelin) are examined to see if they corroborate listeners' sound quality ratings of the products based on controlled double-blind listening tests. Part 2 summarized the results of those listening tests, and Part 1 described the listening test methodology used for this research.
Throughout this article, I will refer to some slides of a presentation that can be downloaded as a PDF or viewed as a YouTube video.
Mono or Stereo Acoustical Measurements?
There is a substantial body of scientific research on the subjective and objective testing of conventional stereo loudspeakers [1]-[5]. Unfortunately, the same is not true for iPod Music Stations, which raises several research questions about how they should be evaluated and measured.
The first important question is whether the acoustical measurements should be done in mono or stereo. Due to the proximity of the left and right channel transducer arrays in Music Stations, there is the potential for constructive and destructive interference when both channels are active that will vary according to frequency and the relative inter-channel levels and phases of the music signals. To study this phenomenon, the left and right channels were measured and analyzed as both single and combined channels. Generally, we found very little difference in the frequency responses (magnitude and phase) of the left and right channels. Combining the two channels only led to the expected 6 dB increase in sound pressure level (SPL).
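The +6 dB figure for coherent summation of two identical channels is easy to verify numerically:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone in the left channel
right = left.copy()                    # identical (coherent) right channel

def level_db(x):
    """Level of a signal in dB relative to an RMS of 1."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

gain = level_db(left + right) - level_db(left)   # coherent sum: ~ +6.02 dB
```

Had the two channels carried uncorrelated signals, their powers (not amplitudes) would add, giving only a +3 dB increase.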
Anechoic Measurements of the Music Stations
Each Music Station was measured at a distance of 2 meters in the large anechoic chamber at Harman International. The chamber is anechoic down to 60 Hz, and this is extended to 20 Hz through a calibration procedure. Each Music Station was subjected to the same battery of measurements used for designing and testing Revel, Infinity and JBL home loudspeakers. A total of 70 frequency response measurements were taken at 10 degree increments in both horizontal and vertical orbits (slide 4). These measurements were then spatially averaged and weighted to characterize the direct, early and late reflected sounds in a typical listening room, in addition to the calculated directivity indices (slides 5-8).
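Spatial averages such as these are normally formed by averaging sound power (squared pressure) rather than dB values directly. A minimal sketch of that step:

```python
import numpy as np

def spatial_average_db(curves_db):
    """Power-average a set of magnitude responses given in dB.

    curves_db: array-like of shape (n_measurements, n_frequencies).
    Averaging is done on squared pressure (power), then converted back
    to dB, which is how spatially averaged curves such as the sound
    power response are formed.
    """
    power = 10.0 ** (np.asarray(curves_db) / 10.0)
    return 10.0 * np.log10(power.mean(axis=0))

# Two toy curves, 3 frequency points each
curves = [[0.0, -3.0, -6.0], [0.0, -3.0, 0.0]]
avg = spatial_average_db(curves)
```

Note that power averaging always gives a result at or above a plain dB average, since the louder measurements dominate the sum.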
The family of measurement curves (slide 9) reveal significant differences among the three Music Stations in terms of their smoothness and low frequency extension below 70 Hz.
Music Station A has the smoothest frequency response across the family of curves, which corroborates listeners’ comments about its neutral sound and absence of colorations (see slide 11 of Part 2). There is also physical evidence in the measurements that explain listener comments about Music Station A sounding a bit bright and thin, due to a combination of the upward spectral tilt in its listening window curve, and its higher low frequency cutoff.
Music Station B has even more peaks and dips in the curves that contribute to the higher frequency of listener comments regarding audible coloration. Particularly problematic is the large broad resonance at 500 Hz that is visible in both the direct and reflected sounds produced by the product. However, there is nothing in the measurements to explain listeners’ complaints about its boomy bass.
Music Station C clearly has the least tidy set of measurement curves with a significant hole centered at 2 kHz in the on-axis curve. There are visible resonances in the measurements that elicited frequent listener comments about “midrange unevenness” and “coloration.” Finally, the sound power response and directivity indices reveal that this Music Station becomes increasingly directional at higher frequencies compared to its competitors. This could contribute to coloration and dullness at off-axis listening positions and at further listening distances.
Relationship between Anechoic Measurements and Listener Preference
The anechoic measurements of the Music Stations are shown again in slide 10 along with the listener preference ratings. From this, we see that the overall smoothness of the family of curves appeared to be an important underlying factor that influenced listeners' Music Station preference ratings.
Correlations Between Anechoic Measurements and Perceived Spectral Balance: The Direct Sound Influences the Perceived Spectral Balance Above 300 Hz
There has been a 30+ year debate in the audio community regarding which set of acoustical measurements best predict the loudspeaker’s perceived sound quality in a typical listening room. There are several different camps that include the direct sound response advocates, the sound power response advocates, the in-room measurement advocates, and others, like myself, who argue that you need a combination of all of the above measurements to accurately predict how the loudspeakers will sound in a room.
One way to tackle this debate is to study the correlation between different loudspeaker measurements and listeners’ perceived spectral balance of the loudspeakers in a room. Slide 11 shows the perceived spectral balance ratings of the Music Stations versus the family of anechoic curves that include the listening window (direct sound), first reflections and sound power response.
For Music Station A, there is good agreement between the perceived spectral balance and the listening window curve, which represents the direct sound over a ± 30 degree horizontal angle. For Music Station B, there is generally poor agreement: listeners complained about boomy bass, yet there is nothing in these measurements to suggest why. There is clearly some information missing in the anechoic measurements and/or perhaps the subjective ratings are faulty. We will come back to this topic later.
For Music Station C, there is good agreement between the perceived spectral balance and the listening window curve (direct sound), with indications that the resonances centered at 1.5 and 3.5 kHz were heard and registered by the listeners.
In summary, it seems that for at least two of the Music Stations, the perceived spectral balance can be approximated by looking at the listening window curves that represent the direct sound. However, the anechoic measurements are missing information needed to explain perceptual effects below 300 Hz.
In-Room Measurements of the Music Stations
Below about 300 Hz, the room acoustics and the Music Station/listener positions can have a significant influence on the perceived quality of reproduced sound. Yet, these physical effects are not captured in the anechoic measurements described in the previous section. To further examine these effects, steady-state frequency response measurements of the Music Stations were taken at the primary listening seat at 6 different microphone positions, and then spatially averaged to remove highly localized acoustical interference effects (slide 12). The 1/6-octave smoothed curves for each Music Station are shown in slide 13. Below 200 Hz, there is evidence of room resonances (high Q peaks and dips) and boundary effects that were absent in the previous anechoic measurements (slide 9). Music Station A had less apparent boundary gain than the other two products, probably because the boundary effect was accounted for in its design.

Correlation Between In-Room Measurements and Perceived Spectral Balance: The Influence of Room and Boundary Effects Below 300 Hz
The in-room measurements are plotted in slide 13 along with listeners' perceived spectral balance ratings. Here, the in-room measurements have been heavily smoothed (1 octave) to better correspond to the frequency resolution of the subjective ratings.
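Fractional-octave smoothing can be sketched as a moving average on a log-frequency axis. This toy version uses a rectangular window and dB-domain averaging; real analyzers often use shaped windows and power averaging, so treat it as illustrative:

```python
import numpy as np

def octave_smooth(freqs, mag_db, width_oct=1.0):
    """Smooth a magnitude response over a window of width_oct octaves.

    For each frequency point, average (in dB) all points lying within
    +/- width_oct/2 octaves of it. Simplified sketch for illustration.
    """
    freqs = np.asarray(freqs, dtype=float)
    mag_db = np.asarray(mag_db, dtype=float)
    half = width_oct / 2.0
    out = np.empty_like(mag_db)
    for i, f0 in enumerate(freqs):
        mask = np.abs(np.log2(freqs / f0)) <= half
        out[i] = mag_db[mask].mean()
    return out

# A wiggly toy response in dB, smoothed to 1-octave resolution
freqs = np.logspace(np.log10(20.0), np.log10(20000.0), 100)
raw = 3.0 * np.sin(np.log2(freqs))
smooth = octave_smooth(freqs, raw, width_oct=1.0)
```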
Below 300 Hz, there is better agreement between the in-room measurements and listeners’ spectral ratings than observed using the anechoic measurements (slide 11). However, above 300 Hz, there is generally better agreement between the anechoic measurement and spectral ratings, particularly using the listening window curve that represents the direct sound. This confirms the important role that the direct sound plays in our perception of reproduced sound. Below 300 Hz, the room’s standing waves and boundary effects play a dominant role in the quality and quantity of bass we hear. Previous studies [5] have shown bass quality accounts for 30% of listener preference, and cannot be ignored.
Dynamic Compression Measurements
Our scientific understanding of the perception and measurement of nonlinear distortions in loudspeakers is still quite poor. There are currently no standard loudspeaker measurements that adequately capture the perceptual significance of dynamic compression and the associated distortions it produces. This is an area of audio that is in need of more research.
Listeners reported that Music Station A had fewer audible nonlinear distortions than the other two Music Stations. However, it was not clear if the distortions were real or due to a cognitive bias known as the “Halo effect.” Examining the objective distortion measurements will hopefully clarify what is real and not real.
The dynamic linearity of the Music Stations was tested by measuring their anechoic frequency response at playback levels from 76 to 100 dB SPL (@ 1 meter) in 6 dB increments. A relatively short (4 s) log sweep was used as the test signal to minimize thermal effects on the transducers. Consequently, the measured dynamic compression shown below was largely related to the behavior of the electronic limiters in the Music Stations, which are designed to prevent amplifier clipping that could otherwise damage the transducers.
Slide 16 shows the dynamic compression for each Music Station. The frequency responses measured at 82, 88, 94 and 100 dB SPL have been normalized to the 76 dB measurement, so any dynamic compression appears as a deviation from 0 dB. In examining these graphs, Music Station A produced 6 dB more output (100 dB @ 1 meter) than the other Music Stations without significant compression effects.
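The normalization step can be sketched as follows, using made-up response data; a perfectly linear unit would plot as 0 dB at every frequency:

```python
import numpy as np

# Toy frequency responses (dB SPL at 1 m) measured at increasing drive levels.
# Each sweep is driven 6 dB hotter than the last; a perfectly linear system
# would simply shift up by the drive increment every time.
resp_76 = np.array([70.0, 72.0, 71.0])
resp_82 = resp_76 + 6.0                                   # linear behavior
resp_100 = resp_76 + 24.0 - np.array([0.0, 2.0, 0.5])     # limiter eats output

def compression_db(resp, resp_ref, level_gain_db):
    """Deviation from linear gain; 0 dB everywhere means no compression."""
    return (resp - resp_ref) - level_gain_db

c82 = compression_db(resp_82, resp_76, 6.0)     # all zeros: no compression
c100 = compression_db(resp_100, resp_76, 24.0)  # negative where limited
```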
On the surface, the relationship between these measurements and listeners’ distortion ratings seems to be straightforward: the Music Stations with the higher amounts of compression received lower distortion ratings (slide 17). However, the SPL’s at which the compression effects occurred (> 94 dB) were higher than those used in the listening test.

Harmonic Distortion Measurements
Harmonic distortion (second and third harmonic only) measurements were made in the anechoic chamber at an SPL of 95 dB. The distortion levels of the harmonics are plotted along with the fundamental for each of the Music Stations in slide 18. Note that the levels of the harmonics have been raised 20 dB for the sake of clarity.
All of the Music Stations exhibited relatively high distortion at low frequencies below 100 Hz, with generally less harmonic distortion at higher frequencies. Music Station B differentiated itself by having higher levels of second and third harmonic distortion between 100 Hz and 1 kHz. Music Station C had the lowest measured distortion even though it received the lowest preference and distortion ratings from the listeners.
In conclusion, the harmonic distortion measurements of the Music Stations are not particularly good at predicting listeners’ distortion ratings, or overall preference in sound quality. This confirms many previous loudspeaker studies that have reported that harmonic distortion measurements are poor predictors of listeners’ overall impression of the loudspeaker. This can be explained by the fact that the distortions are often below the threshold of audibility, and the measurements themselves do not account for the masking properties of human hearing.
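For readers who want to relate harmonic levels in dB to a distortion percentage, here is a minimal conversion (the levels below are invented; a single harmonic 40 dB below the fundamental corresponds to 1% distortion):

```python
import numpy as np

def thd_percent(fund_db, harmonic_dbs):
    """Total harmonic distortion from component levels given in dB SPL.

    fund_db: level of the fundamental; harmonic_dbs: levels of the
    2nd, 3rd, ... harmonics. Returns THD as a percentage of the
    fundamental's pressure amplitude.
    """
    fund = 10.0 ** (fund_db / 20.0)
    harms = 10.0 ** (np.asarray(harmonic_dbs) / 20.0)
    return 100.0 * np.sqrt(np.sum(harms ** 2)) / fund

# Fundamental at 95 dB SPL, 2nd harmonic 40 dB down, 3rd harmonic 46 dB down
thd = thd_percent(95.0, [55.0, 49.0])
```

Remember that in slide 18 the harmonic curves are plotted 20 dB higher than measured, so 20 dB must be subtracted from the plotted levels before applying a conversion like this.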

Conclusions
This article has shown evidence that a combination of comprehensive anechoic and in-room measurements can help explain listeners’ preferences and spectral balance ratings of the Music Stations evaluated in controlled listening tests.
Above 300 Hz, the anechoic derived listening window curve correlated well with listeners’ spectral balance ratings, whereas the in-room measurements better explained the Music Station’s acoustical interactions with the room below 300 Hz. In these particular tests, the overall smoothness of the on and off-axis frequency response curves provided the best overall indicator of listeners’ preferences and their comments.
Dynamic compression measurements revealed significant differences among the Music Stations in terms of their linear SPL output capability. The most preferred Music Station could play 6 dB louder (100 dB SPL @ 1 meter) than the other units without exhibiting significant dynamic compression. It is unlikely that this was a factor in the listening tests since the units were evaluated at a comfortable average level of 78 dB (B-weighted, slow). Finally, distortion measurements revealed some differences among the products but had no clear correlation with listeners’ sound quality ratings. This highlights the need for further research into the perception and measurement of nonlinear distortion in loudspeakers so that loudspeaker engineers can optimize their designs using psychoacoustic criteria.
References
[1] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1," J. AES, Vol. 34, Issue 4, pp. 227-235, April 1986. (download for free courtesy of Harman International).
[2] Floyd E. Toole, "Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2," J. AES, Vol. 34, Issue 5, pp. 323-348, May 1986. (download for free courtesy of Harman International).
[3] W. Klippel, "Multidimensional Relationship between Subjective Listening Impression and Objective Loudspeaker Parameters", Acustica 70, Heft 1, S. 45 - 54, (1990).
[4] Sean E. Olive, “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results,” presented at the 116th AES Convention, preprint 6113 (May 2004).
[5] Sean E. Olive, “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part 2 - Development of the Model,” presented at the 117th AES Convention, preprint 6190 (October 2004).

Saturday, April 10, 2010

Evaluating the Sound Quality of iPod Music Stations: Part 2 Listening Tests


Part 1 of this article described a listening test method used at Harman International for evaluating the sound quality of iPod music docking stations. In Part 2, I present the results of a recent competitive benchmarking listening test in which three popular Music Stations of comparable price were evaluated by a panel of trained listeners. Were listeners able to reliably formulate a preference among the different iPod Music Stations using this test method? And what were the underlying sound quality attributes that explain these preferences? Read on to find out.

Throughout this article, I will refer to slides in an accompanying PDF presentation, or you can watch a YouTube video of the presentation.


The Products Tested
A listening test was performed on three iPod Music Stations that retail for approximately the same price of $599: the Harman Kardon MS 100, the Bose SoundDock 10, and the Bowers & Wilkins Zeppelin (see slide 2). All three products provide iPod docking playback capability and an auxiliary input for external sources such as a CD player. The latter was used in these tests to reproduce a CD-quality stereo test signal fed from a digital sound source.


Listening Test Method
The Music Stations were evaluated in the Harman International Reference Listening Room (slide 4), described in detail in a previous blog posting. Each Music Station was positioned on a shelf attached to the Harman automated in-wall speaker mover, which provides the means for rapid multiple comparisons among products designed to be used in, on, or near a wall boundary. The Music Stations were level-matched to within 0.1 dB at the listening position by playing pink noise through each unit and adjusting the acoustic output level to produce the same loudness measured via the CRC stereo loudness meter.
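The level-matching step amounts to computing a per-unit gain offset from the measured pink-noise levels. A sketch, using hypothetical measured levels:

```python
import numpy as np

def matching_gain_db(measured_db, target_db=None):
    """Gains (dB) that bring each unit's pink-noise level to a common target.

    measured_db: levels at the listening position for each product.
    By default everything is matched down to the quietest unit, so no
    amplifier is asked for more output than it was measured at.
    """
    measured_db = np.asarray(measured_db, dtype=float)
    if target_db is None:
        target_db = measured_db.min()
    return target_db - measured_db

# Hypothetical pink-noise levels for three units (dB at the listening seat)
gains = matching_gain_db([78.4, 77.9, 78.1])
```

Applying these gains before the test removes loudness as a confound, since even sub-dB level differences are known to bias preference toward the louder device.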

All tests were performed double-blind with the identities of the products hidden via an acoustically transparent, but visually opaque screen. The listening panel consisted of 7 trained listeners with normal audiometric hearing. Each listener sat in the same seat situated on-axis to the Music Stations positioned at seated ear height, approximately 11 feet away (slide 5).

The Music Stations were evaluated using a multiple comparison (A/B/C) protocol whereby listeners could switch at will among the three products before entering their final comments and ratings based on overall preference, distortion, and spectral balance. This was repeated using four different stereo music programs with one repeat (4 programs x 2 observations = 8 trials). In total, each listener provided 216 ratings, in addition to their comments. The typical length of the test was 30-40 minutes. The presentation order of the music programs and Music Stations was randomized by the Harman Listening Test software to minimize any order-related biases in the results.
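A randomization scheme like the one described can be sketched as follows. The program names and seed are placeholders, and the actual Harman Listening Test software surely differs in detail:

```python
import random

def randomized_trials(products, programs, repeats=2, seed=7):
    """Build a randomized trial list.

    Every program appears `repeats` times, the A/B/C assignment of
    products is shuffled independently per trial, and the trial order
    itself is shuffled to break up program-order effects.
    """
    rng = random.Random(seed)
    trials = []
    for prog in programs * repeats:
        order = products[:]          # copy before shuffling in place
        rng.shuffle(order)
        trials.append((prog, order))
    rng.shuffle(trials)
    return trials

trials = randomized_trials(["HK MS100", "Bose SD10", "B&W Zeppelin"],
                           ["prog1", "prog2", "prog3", "prog4"])
```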


Results: Overall Preference Ratings For the Music Stations
A repeated measures analysis of variance was used to statistically establish the effects and interactions between the independent variables and the different sound quality ratings. The main effect was related to the Music Stations, with no significant effects or interactions observed between the program material and Music Stations. Note that in the following discussion, the brands/models of the Music Stations have been removed from the results since this information is not relevant to the primary purpose of the research and this article. Instead, the Music Stations have been assigned the letters A, B, and C in descending order of their mean overall preference rating.

The mean preference ratings and upper 95% confidence intervals based on the 7 listeners are plotted in slide 7. Music Station A received a preference rating of 6.8, and was strongly preferred over Music Stations B (4.58) and C (4.08).


Individual Listener Preference
The individual listener preference ratings and upper 95% confidence intervals are plotted in slide 8. The intra- and inter-listener reliability in ratings was generally quite high. All seven listeners rated Music Station A higher than the other two products, although some listeners, notably 55 and 64, were less discriminating and reliable than the other listeners. Both of these listeners had significantly less training and experience than the other listeners, which previous studies have shown to be an important factor in listener performance.


Distortion Ratings
Nonlinear distortion includes audible buzzes, rattles, noise and other level-dependent distortions related to the performance of the electronics, transducers, and mechanical integrity of the product’s enclosure. In these tests, the average playback level was held constant (78 dB(B) slow), and listeners could not adjust it up or down. Under these test conditions, some listeners still felt there were audible differences in distortion (slide 9) with Music Station A (distortion rating = 7.19) having less distortion than Music Stations B (5.5) and C (4.94).

Some of these differences in subjective distortion ratings could be related to a “Halo Effect," a scaling bias wherein listeners tend to rate the distortion of loudspeakers according to their overall preference ratings - even when the distortion is not audible. An example of “Halo Effect” bias has been noted in a previous loudspeaker study by the author [1]. Reliable and accurate quantification of nonlinear distortion in perceptually meaningful terms remains problematic until better subjective and objective measurements are developed.


Spectral Balance Ratings
Listeners rated the spectral balance of each Music Station across seven equally log-spaced frequency bands using a ±5-point scale. A rating of 0 indicates an ideal spectral balance, positive numbers indicate too much emphasis within the frequency band, and negative numbers indicate a de-emphasis within the frequency band. Rating the spectral balance of an audio component is a highly specialized task that requires skill and practice, acquired here through Harman’s “How to Listen” listener training software application. A previous study [1] showed that spectral balance ratings are closely related to the measured anechoic listening window of the loudspeaker, although they may vary with changes in the directivity and the ratio of direct-to-reflected sound at the listening location.
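A quick way to see where seven equally log-spaced bands might lie is to divide the audible range logarithmically. The sketch below assumes the bands span 20 Hz to 20 kHz; the actual band edges used in the test are not stated here, so this is my reconstruction:

```python
import numpy as np

# Hypothetical reconstruction: 7 equally log-spaced bands across 20 Hz-20 kHz.
# The 20 Hz-20 kHz span is an assumption, not a documented test parameter.
edges = np.logspace(np.log10(20), np.log10(20000), 8)  # 7 bands need 8 edges
centers = np.sqrt(edges[:-1] * edges[1:])              # geometric band centers
print(np.round(centers).astype(int))
# Under this assumption, the 2nd and 5th band centers land near 88 Hz and
# 1700 Hz, close to the spectral features discussed below.
```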

The mean spectral balance ratings averaged across all programs and listeners are plotted in slide 10. Listeners felt Music Station A had the flattest or most ideal spectral balance, with the exception of a need for more upper/lower bass, and less emphasis in the upper treble. Music Station B was judged to have too much emphasis in the upper bass (88 Hz), and too little emphasis in the upper midrange/treble. Music Station C was rated to have a slight overemphasis in the upper bass, and a very uneven balance throughout the midrange with a peak centered around 1700 Hz.


Listener Comments
Listeners provided comments describing the audible differences among the three Music Stations. The frequency, or number of times, a specific comment was used to describe each product is summarized in slide 11. The correlation between the product’s preference rating and each descriptor is indicated by the correlation coefficient (r) shown in the bottom row of the table. The same table data shown in slide 11 are plotted in graphical form in slide 12.
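Correlations of this kind take only a few lines to reproduce. The sketch below pairs a descriptor's usage count per product with the products' mean preference ratings; the "colored" counts for Music Stations B and C come from the text, while the count of 0 for Music Station A is my assumption (it is not quoted):

```python
import numpy as np

# Mean preference ratings for products A, B, C (from slide 7)
preference = np.array([6.80, 4.58, 4.08])
# Times "colored" was used per product; B and C counts are quoted in the
# text, the 0 for Music Station A is an assumed placeholder.
colored = np.array([0, 13, 19])

r = np.corrcoef(preference, colored)[0, 1]
print(round(r, 2))  # strongly negative: "colored" tracks low preference
```

Note that with only three products, r is descriptive rather than statistically powerful, which is presumably why the table reports it alongside the raw counts.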

The three most common descriptors applied to Music Station A were neutral (16), bright (9), and thin (9). These descriptors generally confirm the mean spectral balance ratings summarized in slide 10.

The three most frequent descriptors applied to Music Station B were colored (13), boomy bass (10), and uneven mids (6). The “boomy bass” is clearly suggested in the spectral balance ratings (see the large 88 Hz peak) in slide 10.

The three most frequent descriptors used to describe the sound quality of Music Station C were colored (19), uneven mids (9), and harsh (6). All three descriptors have a high negative correlation with the overall preference rating, and may explain the low preference rating this product received. The coloration and unevenness of the midrange are confirmed in the spectral balance ratings in slide 10. The harshness is most likely related to the spectral peak perceived around 1700 Hz.


Conclusions
This article summarized the results of a controlled, double-blind listening test performed on three comparably priced iPod Music Stations using seven trained listeners with normal hearing. The results provide evidence that the sound quality of Music Station A was strongly preferred over Music Stations B and C. There was strong consensus among all seven listeners, who rated Music Station A highest overall. The Music Station preference ratings can be largely explained by examining the perceived spectral balance ratings of the products, which are in turn closely related to listener comments on the sound quality of the products.

The most preferred product, Music Station A, was perceived to have the flattest, most ideal spectral balance, and elicited frequent comments on its neutral sound quality. As the spectral balance ratings deviated from flat or ideal, the products received frequent comments related to coloration, boomy bass, and uneven midrange. While the distortion ratings were highly correlated with preference, more investigation is needed to determine the extent to which the distortion ratings are related to a possible scaling bias known as the “halo effect."

In part 3 of this article, I will present the objective measurements of these products - both anechoic and in-room acoustical measurements - to see if they can reliably predict the subjective ratings of the products reported here.


References
[1] Sean E. Olive, “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results,” presented at the 116th AES Convention, preprint 6113 (May 2004).