I think there will never be any agreement in this debate until every audio engineer and audiophile is sat down in their own room/studio and the blind tests are repeated. Even then, some people will argue the test was flawed.
I am just happy that my “dollar ears” save me a vast amount of money, time and/or effort in the pursuit of signal-path perfection!
I’m familiar with the study you provided… It does not paint the whole picture with regard to the sound quality of the final audition of the bit/word signal… This study does not measure non-linearities that contribute to perceived signal integrity…
Jitter is… The salient question resides with a bit-for-bit integrity analysis between the encoded signal read from the storage medium and the DAC output, and whether or not this linear measurement is the definitive element in defining perceived sound quality.
You must ask why one digital-audio player sounds different from another, and why DACs sound different from one another… It cannot all be cognitive bias.
Who knows, until someone puts together a truly scientific blind test, with a protocol agreed in advance by all participants and a sample size large enough to be representative?
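For a sense of what “large enough to be representative” means in practice, here is a minimal sketch of the standard statistic behind an ABX listening test: an exact one-sided binomial p-value under the null hypothesis that the listener is guessing (50/50 on each trial). The function name and the example trial counts are my own illustration, not from any specific study.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or more out of `trials`
    ABX rounds by pure guessing (null hypothesis p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is just significant at the usual 5% level...
print(round(abx_p_value(12, 16), 4))  # 0.0384
# ...while the same 75% hit rate over only 8 trials is not.
print(round(abx_p_value(6, 8), 4))    # 0.1445
```

This is why short informal comparisons settle nothing: the same hit rate that looks convincing over a long session is indistinguishable from guessing over a handful of trials.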
I do believe cognitive bias is why I think the HugoTT2 sounds better than the Hugo2; otherwise I would have to accept I just wasted 4 grand! It is a pretty powerful bias. The wife refuses to spend the time facilitating a blind test of the two (she can’t be bothered plugging and unplugging cables and setting up the tracks for comparison) and just puts my curiosity down to “male tech insanity”.
I wish this study had developed a mechanism for quantifying perceived sound quality in juxtaposition… not just the detection of temporal phase/timing cues…
Blind tests as usually conducted rely on echoic memory, which numerous experiments have shown lasts about four seconds. So an inability to reliably compare subtle differences between two music passages longer than about two seconds each is really just confirmation of that.
On top of the echoic memory problem, there’s what I’ll refer to as the “Iowa Gambling Task” problem. That’s an experiment where subjects are given decks of cards to gamble with, either a “good” deck or a “bad” deck. If they thought they had a bad deck, they were freely able to request a replacement deck. In the experiment, subjects on average showed elevated galvanic skin response (that is, they literally broke out in a cold sweat) ten rounds of play before requesting a replacement deck. So the subconscious effect and consequent stress occurred long before conscious realization. This is a problem with any experiment that requires a conscious verbal response. You might be having a significant reaction to a negative stimulus (bad sound?) even to the point of breaking out in a cold sweat, yet not have a conscious realization that you’re able to put into a verbal response.
I don’t mean to say at all that objective thinking isn’t valuable. But we’re not at the point yet where we’ve spent the money on experimental setups (for example, fMRI) that can reliably tell us at what point changes in audio characteristics begin to affect our subconscious emotional responses to what we hear.
Of course this cuts both ways: it applies equally when folks insist they can hear differences and when others insist they can’t. For myself, I just try not to be silly enough to have my self-worth tied up in arguments about a hobby that should be all about enjoying ourselves.
The only way to resolve this would be to get the top 200 audio engineers, the golden ears from each hi-fi manufacturer, plus the combined audiophile experts in the media/forums together, and agree a test protocol that addresses all scientific concerns.
Of course, since the manufacturers would never participate in such a test for fear of the results going in a direction they did not wish, it is moot. If it sounds good to you, who cares…
However, when qualifying differing levels of subjective quality of a common reproduction… a double-blind exercise will reveal those knee-jerk cognitive and emotional responses in individual assessment preferences, and therefore the contextual biases.
Yes, every single time; I have never gotten it to work. It logs in and then does nothing after that. When controlling the software on Windows, it works fine. Please advise, thanks!
audirvanaOrigin.service - Run audirvanaOrigin
Loaded: loaded (/etc/systemd/system/audirvanaOrigin.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-05-03 23:22:03 EDT; 2min 54s ago
Main PID: 15853 (audirvanaOrigin)
Tasks: 77 (limit: 8986)
Memory: 8.4M
CPU: 37ms
CGroup: /system.slice/audirvanaOrigin.service
└─15853 /opt/audirvana/origin/audirvanaOrigin
May 03 23:22:03 arup-XPS-9315 systemd[1]: Started Run audirvanaOrigin.
May 03 23:22:03 arup-XPS-9315 audirvanaOrigin[15853]: 2024-05-03 23:22:03.440 [info]: ============= Started Logging, Audirvana Origin 2.5.12.3 linux =============
May 03 23:22:03 arup-XPS-9315 audirvanaOrigin[15853]: 2024-05-03 23:22:03.441 [info]: No language set in settings, using system language: en_US.UTF-8
May 03 23:22:03 arup-XPS-9315 audirvanaOrigin[15853]: 2024-05-03 23:22:03.446 [warning]: EULA not accepted yet, showing it
May 03 23:22:03 arup-XPS-9315 audirvanaOrigin[15853]: 2024-05-03 23:22:03.448 [info]: RemoteServer: Server started and advertised on port 45879
May 03 23:22:17 arup-XPS-9315 audirvanaOrigin[15853]: 2024-05-03 23:22:17.104 [error]: Not logged to Audirvana account: need to sign in