An ongoing controversy within the high-end audio community is the efficacy of blind versus sighted audio product listening tests. In a blind listening test, the listener has no specific knowledge of what products are being tested, thereby removing the psychological influence that the product's brand, design, price and reputation have on the listener's impression of its sound quality. While double-blind protocols are standard practice in all fields of science - including consumer testing of food and wine - the audio industry remains stuck in the dark ages in this regard. The vast majority of audio equipment manufacturers and reviewers continue to rely on sighted listening to make important decisions about their products' sound quality.
An important question is whether sighted audio product evaluations produce honest and reliable judgments of how the product truly sounds.
A Blind Versus Sighted Loudspeaker Experiment
This question was tested in 1994, shortly after I joined Harman International as Manager of Subjective Evaluation. My mission was to introduce formalized, double-blind product testing at Harman. To my surprise, this mandate met with rather strong opposition from some of the more entrenched marketing, sales and engineering staff, who felt that, as trained audio professionals, they were immune to the influence of sighted biases. Unfortunately, at that time there were no published scientific studies in the audio literature to either support or refute their claims, so a listening experiment was designed to directly test this hypothesis. The details of this test are described in references 1 and 2.
A total of 40 Harman employees participated in these tests, giving preference ratings to four loudspeakers that covered a wide range of size and price. The test was conducted under both sighted and blind conditions using four different music selections.
The mean loudspeaker ratings and 95% confidence intervals are plotted in Figure 1 for both sighted and blind tests. The sighted tests produced a significant increase in preference ratings for the larger, more expensive loudspeakers G and D. (Note: G and D were identical loudspeakers except for different crossovers, voiced ostensibly for differences in German and Northern European tastes, respectively. The negligible perceptual differences between loudspeakers G and D found in this test resulted in the creation of a single loudspeaker SKU for all of Europe, and the demise of an engineer who specialized in the lost art of German speaker voicing.)
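For readers unfamiliar with the error bars in plots like Figure 1: a mean rating and its 95% confidence interval can be computed in a few lines. The sketch below uses Python's standard library and a normal approximation; the ratings are invented for illustration and are not data from this study.

```python
from statistics import mean, stdev, NormalDist
from math import sqrt

def ci95(ratings):
    """Mean and 95% confidence interval (normal approximation)."""
    m = mean(ratings)
    z = NormalDist().inv_cdf(0.975)              # ~1.96
    half = z * stdev(ratings) / sqrt(len(ratings))
    return m, m - half, m + half

# Hypothetical preference ratings (0-10 scale) for one loudspeaker,
# rated under the two conditions -- made-up numbers for illustration.
blind   = [6.1, 5.8, 6.5, 6.0, 5.9, 6.3, 6.2, 5.7]
sighted = [7.4, 7.9, 7.1, 7.6, 7.8, 7.2, 7.5, 7.7]

for label, data in [("blind", blind), ("sighted", sighted)]:
    m, lo, hi = ci95(data)
    print(f"{label}: mean {m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

When the intervals for two conditions don't overlap, that is a quick visual indication that the difference between their means is unlikely to be chance.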
Brand biases and employee loyalty to Harman products were also a factor in the sighted tests, since three of the four products (G, D, and S) were Harman branded. Loudspeaker T was a large, expensive ($3.6k) competitor's speaker that had received critical acclaim in the audiophile press for its sound quality. However, not even Harman brand loyalty could overpower listeners' prejudices associated with the relatively small size, low price, and plastic materials of loudspeaker S; in the sighted test it was less preferred than loudspeaker T, in contrast to the blind test, where it was slightly preferred over loudspeaker T.
Loudspeaker positional effects were also a factor, since these tests were conducted prior to the construction of the Multichannel Listening Lab with its automated speaker shuffler. The positional effects on loudspeaker preference rating are plotted in Figure 2 for both blind and sighted tests. The positional effects on preference are clearly visible in the blind tests, yet the effects are almost completely absent in the sighted tests, where visual biases and cognitive factors dominated listeners' judgment of the auditory stimuli. Listeners were also less responsive to loudspeaker-program effects in the sighted tests as compared to the blind test conditions. Finally, the tests found that experienced and inexperienced listeners (both male and female) tended to prefer the same loudspeakers, which has been confirmed in a more recent, larger study. The experienced listeners were simply more consistent in their responses. As it turned out, experienced listeners were neither more nor less immune to the effects of visual biases than inexperienced listeners.
In summary, the sighted and blind loudspeaker listening tests in this study produced significantly different sound quality ratings. The psychological biases in the sighted tests were sufficiently strong that listeners were largely unresponsive to real changes in sound quality caused by acoustical interactions between the loudspeaker, its position in the room, and the program material. In other words, if you want to obtain an accurate and reliable measure of how an audio product truly sounds, the listening test must be done blind. It's time the audio industry grew up and acknowledged this fact, if it wants to retain the trust and respect of consumers. It may already be too late, according to Stereophile magazine founder Gordon Holt, who lamented in a recent interview:
“Audio as a hobby is dying, largely by its own hand. As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me.”
1. Floyd Toole and Sean Olive, "Hearing is Believing vs. Believing is Hearing: Blind vs. Sighted Listening Tests, and Other Interesting Things," presented at the 97th AES Convention, preprint 3894 (1994).
2. Floyd Toole, Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms, Focal Press, 2008.
Hi, it's a very great blog.
I could tell how much effort you've taken on it.
Thanks. I like the name of your blog!
You keep coming up with great articles. I nominate you audio guy of the year! Thanks for taking the time.
P.S. It is amazing to me how hard some audiophiles cling to their idea of bias being everyone else's problem.
Great article Sean! I added a link to this from my Articles page. I'll be sure to post the link in Stereophile's forum too.
Thanks. It's OK for audiophiles to have biases (I have my own), as long as they acknowledge that these biases often get in the way of the truth -- as this study has shown. If we want to determine the true sound quality of a component, the listening test must be blind.
Thanks for the link on your page. You put me right above "Acoustic Treatment Exposed". How do you expect me to ever compete with that? - :)
Some food for thought -
There are several reasons why the audio community is hesitant about blind preference tests. You may have heard them already:
1) Context is very important for perception and judgment. This may lead to biases in a sighted test. However, in a blind test, removing the context reduces your ability to judge altogether. Differences that you clearly heard before diminish. That sighted tests are wrong doesn't mean blind tests are right. This is a brain problem.
2) There is no reference/anchor in a preference test. Who knows exactly how the chosen material is supposed to sound? Listeners who have heard it too many times think it should sound as they heard it on their preferred loudspeaker. Listeners hearing the material for the first time don't know at all. The least preferred loudspeaker in the test might just have faithfully revealed how distorted and bad the mix was.
3) In short-term preference tests, timbral attributes are usually dominant. Small differences in loudness or low-Q resonances influence your judgment most, whereas in a long-term "relationship" with the product other attributes such as low-level transparency (distortion), dynamic range (the ability to reproduce transients), stereo imaging, become more important for relaxed listening to music than timbre. You can easily adjust your preferred level, and get accustomed to low-Q resonances.
Let's have a nice discussion!
What an interesting test! My husband and I (both trained/experienced listeners) noticed this effect recently when we swapped out our tv for one with better resolution. We both perceived that our audio system sounded better than before, even knowing that nothing in the system had changed. I may have to experiment with a little "choice architecture" with my clients (like reviewing mixes against compressed Quicktimes versus a color-corrected master!)
As you well know, there have been scientific studies confirming that the quality and size of the picture influence the perceived quality of sound. The bi-modal sensory interactions work both ways, although the video's influence on the perception of audio is much stronger than vice versa. I used this argument to convince my wife we needed a new large HD video display, since the 28 inch CRT was making my surround audio system sound thin, not enveloping, with low-level granular distortion and noise....
Another (albeit more risky) approach towards making your customers believe your mixes sound better is to start charging them more. They may not think your mixes sound as good as they do, simply because they are not paying enough for them :)
Sorry, I'm in a particularly cynical mood today..
Gordon Holt may be right. I believe some people nowadays buy crappy audio systems on purpose, to distance themselves from the preposterousness that has become associated with audiophiles.
Dear Sean, thanks for your very interesting blog. Being a scientist (I am an MD directing clinical trials at a major pharmaceutical company) and a music enthusiast at the same time, I have always been surprised by the coexistence in the same hobby of a strong scientific background and a "magic" approach.
I personally started to understand the influence of "sight" when I listened on my "very audiophile" system to a $200 CD-DVD player thinking that a $10,000 CD player was plugged in.
By trying to support a more scientific approach to music reproduction (and at least a blind evaluation of "snake oils") I have been banned by an "audiophilic" forum so I started my own (Il Galileo Audiofilo, I am Italian).
I have argued that many of the same reasons for conducting blind drug trials apply to conducting a blind listening test over a sighted one. As a rational person, would you rather choose a drug that had been approved by the FDA based on open or double-blind clinical trials, and why? Apparently my analogy between the two is not relevant, according to some audiophiles in the Stereophile thread about my blog article (see http://forum.stereophile.com/forum/showflat.php?Cat=0&Number=64883&page=0&fpart=all&vc=1&nt=21). I would be interested in hearing your comments on this topic, since you are an MD doing clinical drug trials, as well as an audiophile.
I hope that audio consumers don't throw in the towel just because there is a negative image associated with the term "audiophile." If the industry cannot give consumers reliable subjective data, we could at least give them perceptually relevant objective measurements and product specifications, so consumers can make more intelligent purchase decisions, as I discuss here: http://seanolive.blogspot.com/2009/01/what-loudspeaker-specifications-are.html
Hi Sean, I would agree with your conclusion.
I think we evolved to make quick assumptions by fusing all sensory input to enhance survival. There are trade-offs in our sensing & cognition (cf: perceptual coding, optical illusions, magic tricks, etc.). In modern life, focusing on some cerebral issue, we often fool ourselves, thinking we can make an expert assessment without subjectivity or bias.
With loudspeakers, even more than physical appearance, the purchase price is a big confounder, and a poor correlate of performance.
- Rich Sulin
As the new editor of Hi-Fi+ in the UK (perhaps one of the most 'out there' of audiophile magazines), I guess I am the Loyal Opposition. As such, I respectfully disagree with your suggestion of dishonesty in sighted tests.
The word 'dishonesty' implies some kind of deceit in the actions of the reviewer. Although I cannot speak for all subjective reviewers at all times, I suspect most would view their actions as being principally honest, but holding to a different set of values to yours. There's an obvious analogy here; a conservative might be fundamentally opposed to the viewpoint of a liberal (or vice versa), and may even express incredulity at those who support such a position, but still respect the integrity of that stance. Or at least, that used to be the case, but I suspect “I disapprove of what you say, but I will defend to the death your right to say it” is passé now.
For my part, I maintain that sighted tests can reflect the real-world conditions in which people choose and use their products. For example, because blind tests are inherently level-matched in design, they do not take into account the way products are evaluated by listeners in reality.
Here's an interesting test to explain what I mean: run a blind test on a group of products under level-matched conditions. Then run the same test (still blind), allowing the users to set the volume to their own personal taste for each loudspeaker under test. From my (admittedly dated and anecdotal) testing on this, the level-matched group will go for the one with the flattest frequency response, as will those who turn the volume 'down', but those who turn the dial the other way often choose the loudspeaker with the biggest peak at around 1kHz, saying how 'dynamic' it sounds. I wrote on the topic in the early to mid-1990s (I believe it was in Hi-Fi Choice magazine, but the magazine's back-catalog is long gone now).
Unlike some of my colleagues, I am not opposed to blind testing, in part because of my previous work with Hi-Fi Choice in the UK (which does still - at least partially - continue to run blind tests). However, I am keen to explore all potential avenues to see if audiophiles are hearing things, or hearing things. As such, I think there might be something other than double-blind ABX that has some degree of scientific credibility, and which might be able to answer this... such as longitudinal testing.
I welcome your comments on the subject.
Editor, Hi-Fi Plus magazine
Thank you for your response. I appreciate your feedback, and I am sorry if I caused you offense.
It was not my intent to single out audio reviewers for not doing blind tests. Indeed, most audio manufacturers don't do controlled listening tests as part of the product validation and testing. If they have comprehensive perceptually relevant objective measurements in place, then listening may be less important.
I don't think I implied reviewers are intentionally deceitful and dishonest. The word "dishonest" was used to describe the sighted test methodology itself. It fails in measuring the true sound quality of the product due to the influence of listeners' psychological biases. The listener may not even be conscious of these biases, in which case they could hardly be accused of being "dishonest" or "deceitful." I can hardly be blamed as deceitful if I choose the red speaker over the light green loudspeaker because it sounds louder and more powerful (like a red Ferrari).
Most audio reviewers I've met are decent, honest, intelligent people trying to do the best job they can given the limited time, budget and resources at their disposal. Most reviewers who visit Harman tell me they would love to have access to our listening facilities, or to have something like them for reviewing products. Given the choice, I think most reviewers would use a combination of blind and sighted tests.
I agree that sighted tests have a purpose, particularly to determine the influence of the visual factors (brand, price, design, advertising) on consumers’ perception. This allows audio companies to optimize the right balance of sound quality versus other important design/marketing variables (industrial design, advertising, etc) and predict consumer acceptance in the marketplace. Also, it doesn’t require a blind test to establish that a speaker sounds unacceptable due to audible rub and buzz.
Your example of having listeners adjust the level of different speakers to their preferred taste, to me, correlates with how much non-linear distortion or power compression the speakers have. Listeners will tend to increase the volume until the speaker and/or their ears begin to produce high-order distortion. Be careful: If the loudspeaker is a JBL Everest - you may find yourself listening at dangerously high SPL levels (>110 dB peak) before you realize it!
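As an aside, the level matching discussed in this thread is often approximated in software by equalizing RMS levels before a comparison. Below is a crude sketch assuming raw sample arrays; real protocols match band-limited SPL with a calibrated microphone at the listening position:

```python
from math import sqrt

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def level_match(samples, reference):
    """Scale `samples` so its RMS level equals that of `reference`."""
    gain = rms(reference) / rms(samples)
    return [s * gain for s in samples]

# Hypothetical example: bring a quieter signal up to the reference level.
reference = [0.8, -0.8, 0.8, -0.8]
quiet     = [0.2, -0.2, 0.2, -0.2]
matched   = level_match(quiet, reference)
```

Even a fraction of a dB of level mismatch can masquerade as a quality difference, which is why level matching is treated as a prerequisite for a fair blind comparison.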
No offense taken at all, and from what I gather, I would be one of the envious of your facility.
My goal in writing here is arguably the same as the one I hold for the magazine: that there needs to be some kind of rapprochement between the objective and subjective sides of the business. This is a long-term goal; I need to build a foo-broom with a longer handle first ;)
1) I agree that context is a factor when judging the quality of audio, wine, food, etc. Wine reviewers argue that they would rate a French Bordeaux from a well-established producer differently in a sighted test, knowing its track record of how it historically ages over time. Still, I think a combination of blind and sighted tests can best deal with that. Many consumers aren't interested in aging their wine and want to know how it tastes today.
Your 2nd and 3rd points are not specifically related to blind tests but apply to all listening tests in general.
2) There is nothing to prevent an experimenter from putting anchors in preference tests to help anchor and calibrate the scale. Anchors are recommended in the ITU-R MUSHRA standard. Training listeners leads to more consistent use and interpretation of the scale, and familiarizes them with the program material. It's true that listeners never truly know how the recording should sound unless they were in the recording control room at the time it was mixed and mastered. That doesn't seem to be much of an obstacle for trained listeners, who can listen to several different programs and separate the distortions in the programs (which are constant among speakers) from those in the speakers. Clearly, careful selection of neutral-sounding recordings helps here. We tend to use recordings made by recording engineers with a track record of making excellent recordings - like George Massenburg.
3) I am not aware of any studies that support the claims you make. In fact, in some cases, I would argue the opposite: over the long term, listening to a speaker in isolation, I suspect people adapt, up to a certain point, to loudspeaker problems that are clearly audible and more objectionable in comparative blind tests. Also, there is nothing to prevent people from doing extended double-blind tests to see whether your speculations are true or not. If people are not reporting distortion/spatial differences in listening tests over short periods of time, then the test probably hasn't been designed to focus listeners' attention on these differences. This can be addressed by training listeners to be sensitive to these differences and designing tests that require listeners to rate the test objects on spatial and distortion scales. Distortion is often not a factor for larger loudspeakers until the playback levels get rather high.
Great article! Having read reviews (or to be precise, sheer lunacy) of power cords and interconnects at $20,000/m, I really enjoy a knowledgeable blog!
Regards, Andrei (aka MacGuru)
Being an author/reviewer for a hi-fi magazine myself, I enjoyed your article about double-blind listening tests very much. However, as it's a German magazine (Image Hifi), your mention of a "... lost art of German speaker voicing" caught my attention. Would you mind elaborating on this?
Thanks. I am happy you enjoyed my article on blind tests. The remark about the "lost art of German speaker voicing" was mostly stated as sarcasm -- since there is no scientific evidence that Germans like different-sounding loudspeakers than the rest of the world.
Harman used to employ a speaker designer in Germany who would re-voice crossover networks for Harman loudspeakers sold in the rest of Europe so that they fulfilled the distinctively different German taste in sound quality. Such a claim was never validated -- but the practice led to two different SKUs being released for Europe: one for Germany and one for the rest of Europe.
When we performed the blind versus sighted tests, they included two otherwise identical speakers with different crossover networks (one voiced by the German designer and one voiced by someone else for the rest of Europe). There were slight measurable differences in frequency response, but the blind listening tests showed that neither the German nor the other listeners could reliably formulate a preference between the two. We stopped making a separate SKU for Germany, based on the belief that Germans like the same high-quality, accurate loudspeakers preferred by the rest of the world. To my knowledge, no loudspeaker company does "country-" or "culturally-specific" loudspeakers. The same model is sold in all markets. If you know of any examples, I would be very interested in hearing about them.
thanks for the quick and detailed response. Very interesting!
Germans do like the sound of loudspeakers from all over the world now, but things were different in the 70's and early 80's, when German music lovers suffered from the "Taunus sound", i.e. loudspeakers voiced to have something like a built-in loudness effect, with lots of bass and super-analytical treble. This sound was named after the mountain area where many (West) German loudspeaker manufacturers resided (near Frankfurt).
The Taunus sound made Brit-Fi big in Germany, since many listeners fled its acoustical harassment and turned to brands like Mission, Naim, Linn or Celestion instead for their mid-emphasizing, "musical" sound; or they converted to the tube-friendly, "lyrical" sound of French brands; or chose the powerful, muscle-loaded sound of US-made amps and speakers, which were designed to match the wooden houses with their bass-absorbing walls ...
The times, they are a-changin': today Germans develop loudspeakers for British brands ... But IIRC, a former German distributor of Martin Logan loudspeakers modified their crossovers, changing some components for "better" ones.
I would be very interested in learning more about the "German voicing" you mentioned. I wouldn't mind if you contacted me by mail on "my first name""at""my last name".de.
Hello Sean -
It makes sense that listeners would prefer accurate / neutral speakers for well recorded / "balanced" recordings, but there is such a wide range of recording quality out there that I wonder how this is taken into account in your testing?
For instance, if a consumer likes to listen mainly to Pop recordings (which I find tend to be too bright through an accurate speaker), wouldn't the consumer gravitate towards speakers with rolled off treble to compensate for the brightness in the recording?
How do you select the recordings you use for testing?
Thank you for your comment. You make a good point: program material is a significant nuisance variable in loudspeaker tests due to the lack of standardization in the recording industry in terms of the loudspeaker monitors and rooms used to make recordings. As a result, the sound quality of recordings is quite variable, and your opinion of the loudspeakers may vary depending on the quality of the recording itself. Harman makes loudspeakers for both recording engineers (JBL LSR) and consumers, and the performance targets are exactly the same. In this way, there is a greater chance that consumers hear what the artist intended.
We deal with loudspeaker-program interactions by choosing programs that are neutral and revealing of loudspeaker problems based on statistical analysis of listener training data and results from product tests. If the program is "too bright" or "too dull" this will show up in the statistical analysis of the data and we can correlate this to acoustical measurements of the loudspeaker.
Also, since we use trained listeners they become intimately familiar with the programs and their sound quality idiosyncrasies, and to a certain extent learn and compensate for recordings that are slightly "bright/dull", "thin/ full" etc.
2+2 short questions:
(1) Were the differences in the blind listening across the 4 loudspeakers statistically significant?
(2) What surprised me were the large differences between the two speaker positions (for blind listening). Especially the more expensive HK speakers fall behind. Can you give some more detail on the speaker placements (e.g. was one of them an "extreme" one, like close to a wall)?
(3) I cannot access the full papers through my University (UMD.edu). Would you mind sending me the papers to nico(at)cs.umd.edu. I am interested in how you controlled the order of your participant groups.
(4) Few years ago I was motivated myself to do such experiments and to publish online: http://hifiexperiments.blogspot.com
(well, it turned out to be too much work to do alongside my PhD in Computer Science (and recruiting enough study subjects is hard) - still, the empirical component is the same and lots of the issues in the fields are the same - maybe Empirical Software Engineering is some years ahead in convincing people to validate results with proper studies)
From a perceptual perspective, if one feels a better looking set of speakers sounds better than a worse looking set, does the buyer really care???
Similar results from experiments with wine and food colorings - white wine colored red tastes like red wine!
I find it most amusing that the auditory assessment of a hi-fi system usually assesses a synthetic experience totally created at the mixing desk. So there is no clear reference to reality, but only a perception of an artificial reality compared to a notion of an imagined reality.
Having recently been to a few live orchestral performances, I think I prefer the recording to the real thing, since the real thing only sounds really good where the conductor's head is and I can't sit there!
When an audiophile fails to recognize a difference between tweaks in a properly conducted test, it always seems that it is the test that fails, not his fallible senses.
I am a software development manager with a Computer Science and Electrical Engineering degree, and an audiophile for more than a decade. I am not averse to either sighted listening or blind testing. My main opinion is that a review should be done over a long period of time (say 3 or 4 months). I think it is not possible to do double-blind testing over long periods, and that is why reviewers resort to sighted testing. I do not have high-priced equipment, but I do think that there are some technologies and sciences that have been deemed "beyond human levels of detection" without proper studies. I have found many audiophile products do make a difference; I only wonder why they cost so much.
Sean wrote: "An important question is whether sighted audio product evaluations produce honest and reliable judgments of how the product truly sounds."
Great article overall, very revealing (no pun intended). I'm puzzled/troubled by the use of the word "honest", which implies some form of "dishonesty" in sighted evaluations. Dishonesty implies lying or willful disregard for the truth. Do you stand by this characterization? I'm sorry to say that your choice of this word seems consistent with a rather odd and unreasonable bias on the author's part; a desire to attach a more negative opinion than is called for to sighted evaluation.
Thanks for your comments. In retrospect, my choice of the word "dishonest" in the title was perhaps too strong and sensational. Sighted evaluations have their uses (I use them sometimes myself), and doing one doesn't necessarily imply willful disregard for the truth. However, people who do sighted listening tests should be aware of their limitations and potential biases, as demonstrated in this article. Unfortunately, many people in our industry routinely report results from sighted listening evaluations without regard to or acknowledgment of these biases or limitations. Some even go as far as to argue that sighted tests are more accurate and less biased than blind tests. Call that whatever term you feel is most appropriate: unprofessional, lack of journalistic integrity, _________
My opinion, as stated in the article, is that the true sound quality of a component can only be reliably measured via a blind test. Anything less than that may be a willful or unwitting distortion of the truth.
Are there any legal ramifications of selling a strictly audio product?
I don't understand your question. But if I do understand correctly, the answer is "no".
I was banned from a number of guitar forums for advancing the same ideas. It seems like most people on the guitar forums believe that sound quality always goes up with price, and in direct proportion. And they are EXTREMELY resistant to the concept of blind tests.
One guy actually recorded identical clips with two Fender guitars (one was several times the price of the other) and had people try to pick the better-sounding guitar. 60% of the people chose the cheap guitar.
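Whether a 60% split means anything depends on how many people voted, which wasn't stated. A quick exact binomial check, assuming a hypothetical 30 voters:

```python
from math import comb

def binom_p_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    'hits' if every voter were simply guessing at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 30 listeners, of whom 18 (60%) chose the cheaper guitar.
p_value = binom_p_at_least(30, 18)
print(f"p = {p_value:.3f}")  # well above 0.05: consistent with guessing
```

So a 60% result from a group that size wouldn't, on its own, show that the cheap guitar sounds better; it mainly shows the expensive one doesn't sound obviously better.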
Hi Sean, good article, and props for promoting some kind of sense in the industry. By dissing objective measurement, the audiophile industry has thrown the door to snake oil salesmen WIDE open ($1,000 interconnects are just the start), and the sheer senselessness of their claims has essentially destroyed consumer trust in, and even understanding of, the technology.
I was hoping to read the thread you mention as being here http://seanolive.blogspot.com/2009/01/what-loudspeaker-specifications-are.html
...but it's disappeared. An updated link would be appreciated (I can't even search for it, having no keywords to use.)
Hi Sean, thanks for publishing a robust and objective article. There is a more sinister side to audio reviews and their direct linkage to paid-for advertising that I think needs to be mentioned.
Having worked for a firm that spent over £100k on paid-for advertising in one year - I was quite taken aback at the cost, but I was told by management that it was just the cost of doing business. You just accept it, I guess; I was only there to work on the design of a specific product and then exit at the end of my contract - no problem, I got paid well.
What kind of bugged me was that not one of the products reviewed that year by the magazine concerned scored less than 85%; several were even awarded outstanding product categories, etc.
What really got me was when, one night after a couple of wines, the reviewer went on about the needed improvements to several of the products he'd reviewed for us that year: "such and such an amplifier really lacks the power, drive and presentation for an amplifier at the price point it's being positioned at in the market...your CD players sound dated and lack many playback features, you guys really need to get with the rest of the industry and stop using those dated xxxxxx chipsets...you really need to tidy up the mids in your speaker range, for their price point they really do sound veiled and I can't help but think their distortion is high in the mids blah blah..." What took me aback further was that the products he was referring to were the very ones he'd reviewed that year with glowing write-ups.
I can't say with any certainty that this is commonplace, but, coupled with equipment giveaways, we bought our sales that year.
I was later told by our marketing guy that the industry works on a "...we'll scratch your back if you scratch ours basis...money talks and we don't like to rock the boat...at the end of the day there is enough money in it for all of us".
Any review I read now, I always check the magazine for paid advertising for that specific brand; if it's present, I know I should be taking that review with a very large grain of salt...rather than just the usual subjective grain of salt...
I like your work - you are a pretty sharp pencil. I have listened to every level of audio over my 50 years on this planet, and in the end fell for the Heil air-motion stuff. I can tell the difference between a good DAC and a cheap circuit in a cheap CD player. I guess if you spend a bunch of money your mind simply makes it sound better. I do like all the fancy cabinets and stuff, though, but most people who can afford it are too late in life to hear it.
I used to be able to hear the 15 kHz flyback transformer in a CRT television, but no longer can, for two reasons: one, my age, and two, do they even exist any more? I have built amps from scratch, both solid state and tube, and speakers also. Regards, Julien
I like your articles...very interesting. And I like your blog too. Keep posting.
KUDOS for taking a stand and showing there are at least some actors in the industry who don't want to sell snake oil!
I became interested in hi-fi in my early teens, and believe it led me to take a deeper interest in music than I otherwise would have. Later, I went to university to study microelectronics, and the love affair became a troubled one. I'm no audio engineer, but I learned more than enough to make it rather impossible to deny that hi-fi magazines simply HAD to be full of nonsense. When a CD marker pen was given a rave review, I'd had it - anyone who knows how a CD is read will realize that the supposed mechanism of the pen is simply made up!
The thing is, most people are ignorant of the many and very well-documented reasons to be sceptical of our own perceptions. Most people have heard of the placebo effect, but not many realize just how powerful it is, never mind slightly less well-known psychological phenomena such as priming, anchoring, and so on. I think most of us have assumed (I certainly did!) that our senses act pretty much like sensors, giving us raw data that we consciously interpret. But the fact is that perception IS an interpretation - we are all capable of seeing, hearing, smelling, tasting and feeling things only because our brains are constructing a coherent experience.
Michael Shermer did a TED talk on the topic "why people believe weird things" a few years ago. In it, he does a live demo that I strongly recommend anyone to check out. Here you have a chance to experience for yourself how powerfully priming alters your perception. Shermer plays a recording of Stairway to Heaven in reverse, asking you to listen for "the message". Then he plays it again, SHOWING what you're supposed to hear. The audio is the same, but your perception of the audio will be totally different (unless you know the supposed message from before):
The whole talk is worth watching (11 fast-paced, interesting minutes), but I've linked directly to the priming demo for your convenience.
Thanks for the link. It's an entertaining talk. Thomas Edison was a master at priming people to believe his phonograph recordings could not be distinguished from a live performance.
What you say is true. I've seen, and still see, idiots vouching for a particular brand over another, no matter what the competitor's offering might be. For example, "Bose is the best!", or "You can't beat JBL in pro audio." What kind of crazy shit is that?
I've even seen people taking their brand loyalty to crazy levels: they actually tune themselves to like the sound of their favorite brand's offering, even if they don't like it at the start. How's that?
Unfortunately, the mind-game players at these companies know this; they know what their business booms on. Hence rational ideas like yours get discouraged.
Why is it that the scientific method has to be applied to the business of making judgements about sound, e.g. using the double-blind method, but the double-blind method itself is not subjected to these same scientific methods and is instead taken on faith? Where has anyone produced proof that the double-blind listening test actually works? I have not seen any. It seems simply to be assumed that it works. I thought science was supposed to be sceptical of anything that has not been proved.
The fact is, of course, that it can never be proved to work. Since the object of the double-blind test is to remove psychological bias, which itself can never be "seen" or examined in any meaningful way, there is no way to know if it works. The fact that during double-blind tests people fail to differentiate between different sound sources is taken to indicate that the test works. But this could indicate not that the test really is working by removing psychological bias, as it was designed to do, but that it is removing the subjects' ability to differentiate through other unknown effects it is having on their psychological state. In other words, the test could be negatively affecting the subjects' ability to differentiate sounds by removing essential visual cues that the human mind requires in order to remember sound sources for comparison.
It reminds me of the old joke about a circus-trained spider which hears through its legs. The circus trainer gives verbal instructions to the spider to move this way and that, and the spider obeys. The trainer then cuts off the spider's legs and gives the same verbal instructions to move, but of course the legless spider remains still. "See," says the trainer, "cut off its legs and it goes deaf!"
I am sure the double-blind test does what it claims to do in removing any visually induced bias, but I am equally sure it does a lot more besides: removing essential visual cues sets the mind adrift in a vacuous kind of state that cripples the subject's mental abilities and in effect makes all but the largest and most obvious differences sound the same.
This situation suits the objective-measurements-are-everything brigade, who want a test that will prove what they already believe - that people cannot really tell the difference - so for them this is a test sent from heaven. I think the reality is that this is the test from hell.
The plain truth is that people listen to sound in their heads using their brains, a subjective mental process that will only ever be that. The human mind is not and never will be a laboratory measuring instrument, and the double-blind listening test will not and never will turn it into one.
I'm not sure I understand your line of thought. Double-blind testing works by removing psychological biases from the test. If you get sensitive, accurate and repeatable sound quality ratings from listeners that can be predicted from objective measurements, that tells me that blind tests do work, when done properly.
When people get null results in blind tests, either the test is not sensitive enough (untrained listeners, poor test signals) or perhaps the audible differences are below the detection threshold. That doesn't prove the blind method is invalid or doesn't work; it just means that the test is not well designed or the differences are below threshold for those subjects.
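[As an editorial aside, not part of the original exchange: the sensitivity point can be made concrete with simple arithmetic. In a forced-choice ABX-style comparison, chance performance is 50%, so a listener who genuinely detects a difference only some of the time can still fail to reach statistical significance if the trial count is too low. A minimal sketch of the exact binomial p-value, with the 70% detection rate and trial counts chosen purely for illustration:]

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of getting
    at least `correct` right out of `trials` ABX trials by pure
    guessing (success probability 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who is right 70% of the time "fails" a short test...
print(round(abx_p_value(7, 10), 3))    # → 0.172 (not significant at 0.05)

# ...but the same 70% hit rate reaches significance with more trials.
print(round(abx_p_value(21, 30), 3))   # → 0.021 (significant at 0.05)
```

The point of the sketch is that a null result at a small trial count is weak evidence either way: the test may simply lack the statistical power to detect a real but subtle difference.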
What has been shown is that sighted tests can be manipulated to produce whatever test result you like. People's preferences for audio amplifiers, cables, wine or drugs can be influenced simply by telling them how much the product costs -- even when the comparisons are between the exact same test object.
My experience is that blind tests are more sensitive to small audible differences than sighted tests. When the non-auditory cues are removed people can focus on the audible differences better and you see more sensitive and repeatable results.
That said, I still think it's interesting to study how people perceive audio products under non-laboratory conditions. These are typically single-stimulus tests -- not comparisons among different products -- and are less sensitive. Also, adaptation plays a role: over time people probably adapt to the sonic characteristics of the product.
I linked this post to a Stereophile article here:
Hope you don't mind.
Great article - it's re-ignited my interest in quality hi-fi. Thanks for reminding me that it's about what you can hear, not what people tell you!
Great post. I think it is good for visitors. I like this kind of website, with a lot of real information; it proved to be very helpful. Thanks to the admin - the creativity, presentation and information are all good.