Advice for purchasing Streamer

It is only important if you use DSD sources.

Yes, as an audiophile appreciator of music and the performance and production of music, both historical and modern…

DSD is the highest level of digital-audio recording/encoding. No other digital-audio encoding scheme provides the resolution of 1-bit PDM (DSD) encoding. The purest conversion to analog is made when the signal is delivered by a DAC with a pure 1-bit PDM signal path into a simple low-pass-filtered digital-to-analog output circuit… What this means to you is a personal matter.

I personally convert all PCM files to DSD in Audirvana using r8brain, because 1-bit PDM (DSD) presents a more analog-like signal to my DSD-centric DAC. To my ears that produces a more natural sound, and it preserves the full resolution and contextual sonic elements of the lower-sample-rate PCM encodings in the process.
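For anyone curious what "1-bit PDM" actually means in code: below is a toy first-order delta-sigma modulator that turns multi-bit PCM samples into a 1-bit pulse-density stream. This is only an illustrative sketch of the encoding family DSD belongs to; real PCM→DSD converters such as r8brain use high-order modulators plus steep upsampling, and the function name here is made up.

```python
def delta_sigma_1bit(samples):
    """Encode float PCM samples in [-1.0, 1.0] as a list of 0/1 PDM bits.

    A first-order modulator: the integrator accumulates the error between
    the input and the 1-bit feedback, and the quantizer emits the sign.
    The density of 1s in the output tracks the input amplitude.
    """
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback          # accumulate input-minus-feedback error
        bit = 1 if integrator >= 0 else 0   # 1-bit quantizer
        feedback = 1.0 if bit else -1.0     # feedback DAC value for this bit
        bits.append(bit)
    return bits
```

For a constant input of 0.5, roughly 75% of the output bits come out as 1s, which is how pulse density encodes amplitude.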

:notes: :eye: :headphones: :eye: :notes:

1 Like

Hi -

Several people, including the person who developed the DSD modulators for SoX and Audirvana, have run tests on various PCM and DSD resolutions, and have measured the lowest noise and distortion with higher rate DSD (DSD256 or 512). Whether the differences (quite low level) are audible is subject to discussion/argument. For myself, I feel why not go for the least distortion, since if it is inaudible it doesn’t hurt, and if it is audible it helps. So I have Audirvana upsample to DSD512 and send that to the streamer, which sends it along to the DAC.

You of course may find with your system and your ears that you like something different. :slight_smile:

Thank you for your answers about DSD.

I’m going to try a Laiv Harmony DAC with my Lumin U2. This is an R2R DAC, and this kind of DAC needs a multi-bit PCM signal at its input.

1 Like

Like @Jud says: try whether conversion to DSD is to your liking or not, and don’t fuss about what others think. They don’t own your ears, so any opinion about what you ought to hear isn’t really of any importance.

3 Likes

It appears that @HTox has made a distinct decision to choose the Laiv Harmony R2R DAC, which converts all DSD signals to 705.6kHz or 768kHz PCM. That speaks to a well-thought-out rationale in the context of personal playback preferences… I can see why some folks would go this way… Some will go the other way… PCM → DSD… It is a subjective decision that requires juxtaposition to fully appreciate tangible differences in the different results. :sunglasses:

:notes: :eye: :headphones: :eye: :notes:

Hello all,
I’ve just come back from holiday and I can take some time to write my test.

First of all, as I’m blind, I was able to carry out double-blind tests. And that’s no joke. In any case, it’s a very interesting experience. In my various tests, I’m sure that if I had known what I was listening to, I would have preferred the more expensive solution!

The aim of this test is to compare the USB A output of my Lenovo X1 carbon with a pure high-end streamer like a Lumin U2. In all cases, Audirvana is the starting point for the audio stream.
I tested two DACs: McIntosh MA5200 and LAIV Harmony DAC.

Points that don’t vary during testing:

  • Audirvana studio under Windows 11
  • the Lumin U2 is connected to Audirvana using the UPnP protocol (no problem here)
  • No oversampling.
  • My speakers are Cabasse IROISE 3, connected with AudioQuest Type 2 cables (4-conductor spiral, solid LGC)
  • My amp / preamp is a McIntosh MA5200

Test 1: DAC integrated into the McIntosh
a. Audirvana → USB → McIntosh DAC
b. Audirvana → UPnP → Lumin U2 → coax → McIntosh DAC

Here it was very easy to hear a difference. If I had to say only one thing, it would be that the soundstage is very much narrower with the USB input. I didn’t continue the test for long with such differences.

Test 2: LAIV Harmony DAC
a. Audirvana → USB → Harmony DAC → RCA → McIntosh
b. Audirvana → UPnP → Lumin U2 → coax → Harmony DAC → RCA → McIntosh

Here, still double-blind, the trial lasted longer.
There were differences in the soundstage, but they were much less obvious. I’m not a specialist at putting my acoustic impressions into words, but in any case it was easy to hear a difference in quality that wasn’t down to acoustic preference.
The Lumin U2 was always a step above.

I was glad to do this test because a month ago, I didn’t think that there could be acoustic differences in the digital domain.
I’m convinced that there are almost never any digital errors in the digital audio signal.
But I think that the differences in quality are due to the clock that is sent with the digital audio signal and the noise that can propagate through the cable.
I’d like to know how the clocks synchronize between devices, but the info isn’t easy to find.

There may be solutions to rework the USB output of a laptop, that might be a good idea.

1 Like

Digital-audio signals are transferred asynchronously, with no error correction… (The exception is the Ravenna protocol, which provides error correction.) …What the DAC receives is the metadata about the sample rate…

Noise-related jitter (“Jitter Stew”) is generally the result of an amalgamation of noise factors along the signal topologies of your system components… These include the transceiver interfaces, interconnects, power and ground topologies, and RF/EMF potentials that corrupt the leading and trailing edges of the packet pulses and bit pulses, forcing interrupts in the signal-handling architectures and inducing clock incongruences in the circuit paths…

There is no clock synchronization. In both cases, USB input and UPnP, the clock is solely in the DAC, which is where it should be for best jitter performance.

Noise propagating through the cable is a possibility. Or to put it another way, better isolation from any ground or noise currents that can flow through the system.

If the clock of the source is slower than the DAC’s, the DAC must wait for the next sample! I know there are PLLs to recover the clock from the input.

The “clocks” you are speaking of are very different. Let’s think of an analogy to illustrate:

You have buses that bring people to a place where they wait in line to enter an attraction. You want the people to go through the turnstile at the front of the line at the rate of 1 per second to keep traffic moving smoothly through the attraction. But in order to keep traffic moving smoothly, you need the buses to run often enough to supply a steady flow of people, yet not so often that people overflow the capacity of the line. So someone monitors the line and calls for buses as needed.

The turnstile is the DAC’s clock. It’s very important for this clock to be exact. On the other hand, there can be considerable variation in the timing of the buses, as long as the line doesn’t either run out or overflow. This is the bulk transfer of data that occurs via USB or UPnP to your DAC. So this is really more of a signal from the data buffer in the DAC that it wants more data through the bulk transfer, or that it is full and can’t take more data, than it is a clock that needs to have exact timing.
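The analogy above can be sketched as a toy simulation (names, buffer size, and burst size are all made up for illustration): the DAC clock drains exactly one sample per tick, while the source delivers bursts asynchronously whenever the buffer signals it has room. As long as the buffer never runs dry, the irregular timing of the bursts never reaches the conversion clock.

```python
import collections

BUFFER_SIZE = 64   # hypothetical DAC input buffer capacity, in samples
BURST = 16         # hypothetical samples delivered per bulk transfer

def simulate(ticks):
    """Return the number of underruns (ticks where the DAC found no sample)."""
    buffer = collections.deque()
    underruns = 0
    for tick in range(ticks):
        # Asynchronous side ("the buses"): the buffer requests a burst
        # of data whenever it drops to half full.
        if len(buffer) <= BUFFER_SIZE // 2:
            for _ in range(min(BURST, BUFFER_SIZE - len(buffer))):
                buffer.append(tick)
        # Isochronous side ("the turnstile"): the DAC's exact clock
        # consumes precisely one sample every tick.
        if buffer:
            buffer.popleft()
        else:
            underruns += 1
    return underruns
```

With these numbers the buffer never underruns, which is the whole point: only the DAC-side clock needs exact timing.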

1 Like

The transceiver interface controller inherently operates on its own clock. It’s the data ‘request and report’ interrupt signaling, and the latencies integral to these send/receive requests, that produce noise in the transmission… The DAC’s receiver asynchronously buffers/queues the packets before presenting them to the DAC’s clock topology architecture, where it takes control of the bit signal…
The quality of the transceiver interface and the level of send/receive interrupts will have a direct impact on signal integrity and noise-related jitter.

Just saw this.

The 10G SFP+ spec can be found at the first link below; on page 21 it provides for a maximum total jitter of 28 picoseconds (translating unit intervals to picoseconds at 10G).

I’m not contrasting 10G optical to 10G copper wrt jitter (10G copper is spec’d even lower, 5.5ps - that’s in IEEE802.3, and can be found on page 8 of the second link below), but to 1G Ethernet, which allows for 300ps of total jitter. That can be found in the table at the top of page 5 of the second link below.

https://members.snia.org/document/dl/25879
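The unit-interval-to-picoseconds translation mentioned above is just arithmetic: one UI is the duration of one bit on the line, so a jitter budget given in UI scales with the bit rate. A quick sketch (function name is mine, and the figures come from the post above, not re-derived from the specs):

```python
def ui_to_ps(jitter_ui, bitrate_hz):
    """Convert a jitter budget from unit intervals (UI) to picoseconds.

    One UI = one bit period = 1 / bitrate seconds, i.e. 1e12 / bitrate ps.
    """
    return jitter_ui * 1e12 / bitrate_hz
```

At a 10 Gb/s line rate 1 UI is 100 ps, so a 0.28 UI budget is the 28 ps figure quoted; at 1 Gb/s the same fractional budget would be ten times larger in absolute time.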

I understand, but I believed that S/PDIF and USB work in asynchronous mode without synchronization.

S/PDIF requires a Master → Slave handshake to lock/synchronize the clock of the DAC (Slave) to the incoming bit-stream signal sample-rate from the source (Master)… USB digital-audio transfer protocol is asynchronous.
:notes: :eye: :headphones: :eye: :notes:

1 Like

Do you have any technical documentation about S/PDIF?

Thank you. I want to improve my skills in digital audio to understand how to upgrade my system and avoid marketing traps!

Quoting from the Wikipedia link:

S/PDIF is used to transmit digital signals in a number of formats, the most common being the 48 kHz sample rate format (used in Digital Audio Tape) and the 44.1 kHz format, used in CD audio. In order to support both sample rates, as well as others that might be needed, the format has no defined bit rate. Instead, the data is sent using biphase mark code, which has either one or two transitions for every bit, allowing the original word clock to be extracted from the signal itself.

Edit: Note this does not contradict what @Agoldnear is saying about Master → Slave handshake.
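The "one or two transitions for every bit" in the quoted passage is easy to see in code. Here is a minimal sketch of biphase mark coding (the function is my own illustration, not from any spec text): every bit cell begins with a level transition, and a '1' adds a second transition mid-cell. Because a transition is guaranteed at every cell boundary, the receiver can recover the word clock from the signal itself, which is why S/PDIF needs no separate clock line.

```python
def bmc_encode(bits, start_level=0):
    """Encode a list of 0/1 bits as biphase-mark half-cell levels.

    Each bit becomes two half-cells: the level always flips at the start
    of the cell, and flips again mid-cell only when the bit is a 1.
    """
    level = start_level
    halves = []
    for b in bits:
        level ^= 1            # guaranteed transition at every cell boundary
        halves.append(level)
        if b:
            level ^= 1        # extra mid-cell transition encodes a '1'
        halves.append(level)
    return halves
```

A run of zeros still produces one transition per bit cell, so the clock never disappears from the line even in silence.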

1 Like

Well, that’s someone’s evening ruined then :thinking: :sweat_smile: :sweat_smile:

4 Likes