Network Switch

They're putting the MBZ star on the bonnet of a Lada :roll_eyes:

Amir always applies a strictly objectivist approach, and that doesn't always correspond to reality. The best-measuring device is not always the best-sounding one. But sometimes he has a point.

1 Like

@OffRode Regarding the EtherREGEN device… he has certainly identified a point of diminishing returns in the design… ASR does bring a level-headed approach to most of this stuff… however, the forum has made its point and needs to turn its focus to true double-blind testing processes… He could easily do what Floyd Toole and Sean Olive did with randomized listening panels of real-world subjects… not to reinvent their findings on cognitive biases, but to corroborate the objective measurements…

Saw it too, the Illuminati signs made me fall off my chair :joy: :joy: :joy: :joy:

2 Likes

It reminded me of that trend a few years ago with those hologram bracelets.
What bothers me is that the manual doesn’t say which incantations must be pronounced to activate those gizmos.

1 Like

Curious: how is a Gigabit switch inferior to a 10/100M switch, other than probably being overkill for audio applications? Kindly let me know.

The theory is that higher-speed signalling generates more noise, which then leaks into other components. Again, it’s just a theory.

1 Like

@kvm

@bitracer is making the point that we really don’t know what effect accumulated noise has on the final playback, so why create the potential for more noise being injected into the signal if we can control these things?… We see in the ASR investigations that the fundamental reference being applied is the analog output of the DAC… We must remember the DAC only works with the digital-audio signal it gets, and has limits on how it interprets the bits… The ASR investigations are not comparing the actual bits reaching the DAC against the bits delivered over the network by the transmitter… They are showing that the DAC produces an identical analog output no matter which switch it is connected to in the test environment…

1 Like

There is no doubt that the same bits are reaching the network port on the other end. Otherwise the internet would be broken.
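To illustrate why that holds: Ethernet frames carry a CRC-32 frame check sequence, so a receiver drops any frame whose checksum doesn't match, and TCP retransmits whatever was lost. A minimal sketch (my own example, using Python's `zlib.crc32` as a stand-in for the hardware FCS):

```python
import zlib

# Stand-in for an Ethernet frame payload (e.g. a chunk of audio data)
payload = bytes(range(64))
fcs = zlib.crc32(payload)              # sender appends this checksum

# Intact frame: CRC matches, so the frame is accepted
assert zlib.crc32(payload) == fcs

# Flip a single bit "in transit": CRC no longer matches,
# the frame is discarded and (under TCP) retransmitted
corrupted = bytearray(payload)
corrupted[10] ^= 0x01
assert zlib.crc32(bytes(corrupted)) != fcs
print("single-bit error detected; frame would be dropped and resent")
```

This is why "different bits arriving" is not a plausible mechanism for a switch changing the sound: a corrupted frame never reaches the application at all.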

The question is whether that’s the whole story. Can someone really tell the difference between two switches with everything else being the same?

Apparently, yes.

He claims that he can hear a difference when he changes providers (cable vs. fiber). That’s a pretty wild claim.

Soon we’ll have “audiophile” ISPs that supply routers with linear power supplies.

1 Like

That does seem very far fetched…

@bitracer … Of course the Internet can be broken… it’s a question of the degree to which these breakdowns in code influence what we finally perceive… If you think this is perfect… think again…

That is true for Ethernet, where the protocol stack provides error detection and retransmission… however… if the DAC receives the audio over USB isochronous transfers from the DDC or network interface, all bets are off: there is no retransmission of corrupted packets in that transfer, none in the asynchronous hand-off from the USB receiver to the DAC’s clocking topology, and no error correction in the I2S data flow… With USB isochronous transfers, the distance the digital-audio data and signalling must travel along the transmission line will have an influence… What guarantees that the data-waveform reads (on/off) are interpreted accurately at the USB receiver or the I2S interface in the DAC, and that ghost code is not read…?

@ShitRock @bitracer
This ASR video, “Understanding Jitter in Digital Audio: Measurements and Listening Tests”, provides a foundation for a rational approach to configuring a digital-audio playback system…
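For an intuition of what jitter actually does to the samples, here is a toy model (my own sketch, not ASR's methodology, with an assumed jitter figure of 1 ns RMS): sample a 1 kHz tone once with an ideal clock and once with a clock whose edges wander, and compare.

```python
import math
import random

fs = 44100.0        # sample rate, Hz
f = 1000.0          # test tone frequency, Hz
jitter_rms = 1e-9   # assumed 1 ns RMS clock jitter

random.seed(0)
err_peak = 0.0
for n in range(4096):
    t_ideal = n / fs
    t_jittered = t_ideal + random.gauss(0.0, jitter_rms)
    ideal = math.sin(2 * math.pi * f * t_ideal)
    actual = math.sin(2 * math.pi * f * t_jittered)
    err_peak = max(err_peak, abs(actual - ideal))

# Worst case is near the zero crossings, where the slope is steepest:
# error ≈ 2*pi*f * (timing error), i.e. on the order of 1e-5 of full
# scale here, comparable to a 16-bit LSB (~3e-5)
print(f"peak sample error: {err_peak:.2e}")
```

The point of the sketch is the order of magnitude: for nanosecond-level jitter the sample error sits around the 16-bit quantization floor, which is why measured jitter spurs on competent gear are so far down.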

After viewing this video, we must ask ourselves: how do we measure perceptual variances in our own playback experiences? I know personally that I become accustomed to the sound of my playback system over long periods of listening across a variety of music genres… If I make a change to my system’s configuration, at what point are my interpretations of the new experience clouded by cognitive bias, and at what point are they, positive or negative, grounded in an intimate familiarity with the sound of my system built up over hours and hours of listening? …Personally, I’d like to think my interpretations are tempered by an awareness that cognitive biases may be at play, both during the audition and across subsequent auditions…

:notes: :eye: :headphones: :eye: :notes:

That’s true up to a point. How much data corruption can you objectively expect over such a short lead? That’s also why I2S does not provide error correction.

Yes… this is the salient question in digital-audio transmission…

I added this to my prior post, really addressing the reality of our perceptions:

… Of course the Internet can be broken… it’s a question of the degree to which these breakdowns in code influence what we finally perceive… If you think this is perfect… think again…

That’s why error correction is implemented at the protocol level. USB doesn’t do this when transmitting audio, partly because early implementations were synchronous and partly because music reproduction is timing-sensitive.
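The contrast can be sketched in a few lines (a simplified model of my own, not the actual USB stack, and with a deliberately exaggerated error rate): a reliable transfer retries damaged packets until they arrive clean, while an isochronous transfer gets one shot per deadline and forwards whatever arrived.

```python
import random

random.seed(1)
ERROR_RATE = 0.01   # exaggerated for illustration

def transmit(word):
    """Return (word, ok); ok is False when a bit got flipped in transit."""
    if random.random() < ERROR_RATE:
        return word ^ 0x01, False
    return word, True

def reliable_stream(words):
    """TCP/bulk-style: retransmit each packet until it arrives clean."""
    out = []
    for w in words:
        received, ok = transmit(w)
        while not ok:
            received, ok = transmit(w)
        out.append(received)
    return out

def isochronous_stream(words):
    """Isochronous-style: one attempt per deadline, no retries."""
    return [transmit(w)[0] for w in words]

samples = list(range(1000))
clean = reliable_stream(samples)     # bit-perfect by construction
lossy = isochronous_stream(samples)  # may contain flipped bits
print("reliable bit-perfect:", clean == samples)
print("isochronous bit-perfect:", lossy == samples)
```

The trade-off is deliberate: a retried packet that arrives late is useless to a real-time audio stream, so isochronous transfers trade guaranteed integrity for guaranteed timing.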

1 Like

I keep coming back to USB versus Ethernet because of the difference in protocol layers… Each has its attributes depending on the system configuration… Maybe over a short distance, Ethernet transmission from the computer to an Ethernet receiver built into a DAC is a non-issue…

It doesn’t really matter. Ethernet is designed for longer distances. Over short distances I would always prefer USB, simply because there is no inherent benefit to using Ethernet in that scenario.

1 Like

@bitracer and @Agoldnear
Thanks for sharing your views. I can understand it, sort of a ‘less is more’ argument.

However, I’m not sure how much ‘additional’ noise a gigabit switch contributes compared to a conventional Fast Ethernet switch. Also, a gigabit switch detects whether the connected device is gigabit-capable, but I’m not sure whether it scales down its internal processing accordingly. I couldn’t locate any references on the above, though.

Incidentally, the audiophile-grade switch from SOtM referenced above also supports gigabit ports!
It would be great if we could get some objective input on how these differ with respect to audio.