TBH, in 2025 I would not buy a new device with a 100 Mbps Ethernet spec.
I think these specs are outdated, sorry.
An (almost) obsolete OS, with antiquated network speed, seemingly overpriced RAM, coupled with similarly overpriced SSD storage.
Two things spring to mind:
1. The majority of the eye-watering cost has gone on the (admittedly) lovely case, and
2. The cost is an absolute piss take.
IMHO, of course
======
Can Bandwidth Estimation Tackle Noise at Ultra-High Speeds?
Qianwen Yin, Jasleen Kaur, F. Donelson Smith
Department of Computer Science, University of North Carolina at Chapel Hill
Abstract—While existing bandwidth estimation tools have been shown to perform well on 100Mbps networks, they fail to do so at gigabit and higher network speeds. This is because finer inter-packet gaps are needed to probe for higher rates—fine gaps are more susceptible to be disturbed by small-scale buffering-related noise. In this paper, we evaluate existing noise-reduction techniques for tackling the issue, and show that they are ineffective on 10Gbps links.
https://www.cs.unc.edu/~jasleen/papers/icnp14.pdf
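For a rough sense of scale (my own back-of-envelope figures, not taken from the paper, and assuming 1500-byte probe packets): the inter-packet gap that corresponds to a target probing rate shrinks in proportion to that rate, so microsecond-scale buffering noise that is negligible against a 100 Mbps gap is the same order as the entire gap at 10 Gbps.

```python
# Rough illustration (assumed 1500-byte probe packets, not taken from the paper):
# the inter-packet gap that corresponds to a target probing rate.
PACKET_BITS = 1500 * 8  # bits per probe packet (assumption)

def gap_us(rate_bps: float) -> float:
    """Inter-packet gap (microseconds) needed to probe at rate_bps."""
    return PACKET_BITS / rate_bps * 1e6

for rate in (100e6, 1e9, 10e9):
    print(f"{rate/1e6:>6.0f} Mbps -> gap ≈ {gap_us(rate):7.2f} µs")

# Typical output:
#    100 Mbps -> gap ≈  120.00 µs
#   1000 Mbps -> gap ≈   12.00 µs
#  10000 Mbps -> gap ≈    1.20 µs
# A ~1 µs burst of buffering-related noise barely perturbs a 120 µs gap,
# but is the same order as the whole gap at 10 Gbps.
```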
It seems this is implementation-dependent… If the device is handling multi-channel audio, it is probably not the best choice… However, as we can see, there is plenty of throughput for stereo DSD signals of reasonable sample rates, especially since no native DSD512 or DSD1024 recordings exist; those products are iterations of lower-sample-rate 1-bit encodings… It also appears that @jmtennapel has made a salient argument for a 100 Mbps interface implementation reducing radiated noise at those frequencies, where this noise would influence the digital-audio signal waveform and precipitate jitter.
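A quick sanity check on the throughput claim (my own arithmetic, assuming stereo 1-bit DSD and ignoring packet/container overhead):

```python
# Stereo DSD payload bit rates vs. a 100 Mbps link (assumes 1-bit samples,
# 2 channels, and ignores packet/container overhead).
BASE_FS = 44_100 * 64          # DSD64 sample rate: 2.8224 MHz
CHANNELS = 2

def dsd_mbps(multiple: int) -> float:
    """Payload rate in Mbps for DSD64 * multiple (e.g. 8 -> DSD512)."""
    return BASE_FS * multiple * CHANNELS / 1e6

for name, mult in [("DSD64", 1), ("DSD128", 2), ("DSD256", 4),
                   ("DSD512", 8), ("DSD1024", 16)]:
    print(f"{name:8s} ≈ {dsd_mbps(mult):6.1f} Mbps")

# DSD64 ≈ 5.6, DSD128 ≈ 11.3, DSD256 ≈ 22.6, DSD512 ≈ 45.2, DSD1024 ≈ 90.3 Mbps.
# Even stereo DSD1024 nominally fits within 100 Mbps, though with little
# headroom once protocol overhead is included.
```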
You understand this isn’t talking about the amount of noise, but about the fact that small amounts of inter-packet timing noise interfere with bandwidth measurement at what was considered the “ultra-high speed” of 10Gbps in 2014 (the date of the paper), right?
@lwojtulewicz, as usual we’ve taken the discussion into all sorts of different corners. Is there any information or discussion we can provide that would actually be helpful?
Isn’t it the case that with UPnP the Ethernet port is just doing ordinary file copying per the Ethernet standard, where jitter or noise doesn’t affect the fidelity of the final analog waveform?
Surely the “native digital audio transmission” only happens at the next step? And is it only at that stage (communication between the streamer module and the DAC module) that a dedicated, optimised operating system and a properly implemented communication protocol matter for SQ?
Or maybe Audirvana Studio actually clocks the signal it sends to the UPnP renderer over Ethernet already on the PC where it is installed?
How do you think it works?
Noise is Noise… if it is measurable, it exists as a detriment to signal integrity in the handling of the signal at the interface… It’s all about accumulated jitter… less noise is better…
No… there is no error correction for the digital-audio transmission in any event, nor any endpoint status feedback in the UPnP protocol… The metadata about sample rate is transmitted, but there is no feedback from the endpoint. All digital-audio transmissions are sent isochronously and clocked asynchronously in the DAC.
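As a conceptual sketch only (this is not UPnP code, and all the names are hypothetical): the practical meaning of “clocked asynchronously in the DAC” is that samples arriving with network timing jitter land in a buffer and are clocked out again by the DAC’s own local oscillator, so the arrival timing never reaches the conversion stage. With no feedback channel, as noted above, the cost is slow buffer drift between the sender’s clock and the DAC’s clock.

```python
from collections import deque

class AsyncReclocker:
    """Toy model: network packets arrive with irregular timing; samples are
    drained from a FIFO at the DAC's fixed local clock rate.
    Hypothetical names, for illustration only."""

    def __init__(self):
        self.fifo = deque()

    def on_network_packet(self, samples):
        # Arrival time (and its jitter) is irrelevant; only the payload is stored.
        self.fifo.extend(samples)

    def on_dac_clock_tick(self):
        # Called once per local DAC clock period; timing comes from the
        # DAC's oscillator, not from the network.
        return self.fifo.popleft() if self.fifo else 0  # underrun -> silence

# The trade-off: with no feedback channel, the sender's and the DAC's clocks
# drift relative to each other, so the FIFO slowly fills or drains and must
# occasionally be resynchronised (or samples dropped/repeated).
```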
Skew definition and jitter analysis
By Steve Corrigan
System Specialist, Data Transmission
Texas Instruments Incorporated
(From the TI data transmission application note linked below) –
Over the past few years, jitter has become a signal property that many engineers take very seriously. Signal rise times are getting much shorter in high-speed digital systems, and slight variations in the timing of a rising or falling edge are more important with each additional Mbps. The phenomena of signal skew and data jitter in a waveform not only affect data integrity and set-up and hold times but magnify the signaling rate vs. transmission distance trade-off, ultimately leaving a designer with a degraded system.
https://www.ti.com/lit/an/slyt179/slyt179.pdf
You’re getting warmer.
-
Which Ethernet spec requires not only a mandated level of jitter from the connection, but also that the connection must reduce any incoming jitter to a mandated level, thus preventing accumulated jitter beyond that specified level: 100mbps, 1gbps, or 10gbps?
-
Which Ethernet connection speed is most sensitive to noise, and must therefore keep noise at the lowest level to operate correctly: 100mbps, 1gbps, or 10gbps?
With good design, the clock that controls the timing will be in the DAC/streamer, as close to the digital/analog conversion circuitry as possible. Otherwise the distance between clock and conversion, even at light speed, makes undesirable drift and jitter possible.
If you have an async USB input to a DAC, the DAC controls the clocking (in other words, the desirable situation).
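A hedged sketch of why the async case is the desirable one (illustrative names and numbers, not the actual USB Audio specification): the DAC runs on its own clock and periodically steers how many samples the source should send, based on its buffer fill, so the host’s clock never times the conversion.

```python
# Toy model of asynchronous rate feedback: the sink (DAC) owns the clock and
# steers the source's delivery rate via its buffer fill level.
# All names and numbers below are hypothetical, for illustration only.
NOMINAL_SAMPLES_PER_TRANSFER = 44   # roughly 44.1 kHz with 1 ms transfers
TARGET_FILL = 1000                  # desired buffer occupancy, in samples

def samples_to_request(buffer_fill: int) -> int:
    """DAC-side feedback: ask for slightly more when the buffer runs low,
    slightly less when it runs high (proportional control, arbitrary gain)."""
    error = TARGET_FILL - buffer_fill
    return NOMINAL_SAMPLES_PER_TRANSFER + max(-2, min(2, int(error / 250)))

print(samples_to_request(700))   # buffer low  -> 45 samples next transfer
print(samples_to_request(1300))  # buffer high -> 43 samples next transfer
```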
Hans Beekhuyzen has a YouTube channel. He has addressed your question many times. I’d recommend exploring that channel.
One starting point: https://youtu.be/FQFjFbjx2Sw?si=GtW-UOMC7PGdrmAb
The one thing in Beekhuyzen’s discussion I’d disagree with, if I understood him correctly, is that I believe he said measurements have not been developed for the effects of jitter on the analog sound. I’ve read any number of sources saying the analog effects of jitter are noise and spurious harmonics (harmonic distortion), and the levels of these can be measured. But perhaps I misunderstood him.
There are many forms of jitter; it is a category in which different kinds of sources or effects of disturbances in signals are grouped.
The effects of digital signal induced jitter in the analogue output after the DAC stage cannot be measured properly (yet).
The salient rhetorical question is…
What level of noise (jitter) is acceptable in the transmission of DSD signals?
It appears that a 100 Mbps receiver operates correctly and is appropriate for real-world DSD reception… The inherent need to reduce noise in a 10 Gbps architecture poses the question of why such an interface is required when a 100 Mbps interface will do the job and noise attenuation is also applied in that architecture.
Jitter? Signal Integrity?
@jmtennapel @Jud @lwojtulewicz @matt
The quintessential reference:
Jitter Theory
–Julian Dunn
http://www.audiophilleo.com/zh_hk/docs/Dunn-AP-tn23.pdf
Because of the amalgamation of non-linearities that influence signal integrity in any given playback system, qualitative interpretation is bound by subjective assessment.
You may want to rethink that statement after watching Ted Smith discuss clock drift as it applies to digital-audio signal integrity…
On the contrary, what he’s saying is in perfect agreement with what I said about having the clock as close as possible to the point of conversion (reference where he talks about such things as cables getting in the way of good accurate clocking, and causing jitter).
Thanks, I was going to reference that. Have a look at the graphs on pages 10 (Figure 15), 13 (Figure 19) and 14 (Figures 20 and 21). You will see quite good measurements - in 1990! - of sidebands (harmonic distortion) and “skirts” around the signal spike. The sidebands are harmonic distortion caused by signal correlated jitter; the “skirts” are noise caused by uncorrelated jitter. It sounds no different from any other noise and harmonic distortion. All this was known and measured routinely decades ago. I’m not sure why people think there’s some great mystery to it.
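To make the sideband/skirt distinction concrete, here is a minimal simulation of my own (not from Dunn’s paper): sampling a test tone with sinusoidal clock jitter produces discrete sidebands at f_signal ± f_jitter, while uncorrelated (random) jitter spreads the same energy into a noise floor around the tone. The tone frequency, jitter frequency and 2 ns jitter amplitude are arbitrary choices for illustration.

```python
import numpy as np

fs = 48_000                          # nominal sample rate
f_sig, f_jit = 10_000.0, 1_000.0     # test tone and (sinusoidal) jitter frequency
jitter_pk = 2e-9                     # 2 ns peak jitter (illustrative)
n = 1 << 16

t_ideal = np.arange(n) / fs
# Correlated (sinusoidal) jitter -> discrete sidebands at f_sig ± f_jit
t_sin = t_ideal + jitter_pk * np.sin(2 * np.pi * f_jit * t_ideal)
# Uncorrelated (random) jitter -> broadband noise around the tone
t_rand = t_ideal + np.random.normal(0, jitter_pk, n)

for label, t in (("sinusoidal jitter", t_sin), ("random jitter", t_rand)):
    x = np.sin(2 * np.pi * f_sig * t) * np.hanning(n)
    spec = 20 * np.log10(np.abs(np.fft.rfft(x)) + 1e-12)
    spec -= spec.max()                                  # dB relative to the tone
    freqs = np.fft.rfftfreq(n, 1 / fs)
    side = spec[np.argmin(np.abs(freqs - (f_sig + f_jit)))]
    print(f"{label:18s}: level near {f_sig + f_jit:.0f} Hz ≈ {side:6.1f} dBc")

# For small sinusoidal jitter the sideband level is roughly
# 20*log10(pi * f_sig * jitter_pk), about -84 dBc here: measurable, but far
# below the tone, which is the point made above about routine measurement.
```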
Anyway - we seem to be wandering off the topic again. It would be nice to see what we can do to assist the OP. In that regard, I’ll repeat what I said previously, which is that if it sounds good to him, or in the absence of an audition if what he has been able to learn gives him confidence in the manufacturer/designer, then go for it!
The answer comes from near the beginning of the Ted Smith video you linked, where he’s using his arms to indicate the angle of the analog wave carrying the digital information.
As signal frequency increases, by definition that angle gets steeper. Therefore the same amount of noise or jitter that would be negligible at lower signal frequencies/speeds will be more likely to cause distortion or, in the worst case, transmission errors at higher frequencies/speeds. Thus noise and jitter produced or transmitted by the equipment must be minimized.
Notice the distinction between signal frequency on the one hand, and noise and jitter produced or transmitted by the equipment on the other. Since in the case of audio we’re dealing with a consistent signal frequency (DSD512, DSD256, whatever you like), there isn’t a higher level of noise and jitter being imparted to the Ethernet transmission system via the signal (or a greater vulnerability to noise and jitter that you would get with a higher frequency). And because the Ethernet transmission system in the case of equipment rated for 10Gbps and above is built to handle much higher bandwidths, it’s going to ensure that consistent level of noise and jitter from the signal is minimized, whereas equipment rated for lower bandwidths doesn’t have the same level of amelioration built in.
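One way to put rough numbers on “the same jitter matters more at higher speeds” (my own illustration, ignoring line coding such as MLT-3 or PAM, so order-of-magnitude only): a fixed amount of jitter consumes a much larger fraction of the available bit time as the line rate goes up.

```python
# Fraction of a unit interval (UI) consumed by a fixed 100 ps of jitter at
# different nominal line rates (ignores line coding, so order-of-magnitude only).
JITTER_S = 100e-12  # 100 ps, chosen arbitrarily for illustration

for rate_bps in (100e6, 1e9, 10e9):
    ui = 1.0 / rate_bps  # nominal bit time
    print(f"{rate_bps/1e6:>6.0f} Mbps: UI = {ui*1e12:7.0f} ps, "
          f"100 ps of jitter = {JITTER_S/ui*100:5.1f}% of UI")

#    100 Mbps: UI = 10000 ps -> 1% of UI
#   1000 Mbps: UI =  1000 ps -> 10% of UI
#  10000 Mbps: UI =   100 ps -> 100% of UI (certain bit errors without mitigation)
```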
??? The slew-rate of the interpolator will determine the steepness of the ramp.
Any good electronics designer works under the shortest-path dogma… Speaking of clock-signal path length, how many picoseconds do you think will matter at the interpolation of the digital signal to an analog waveform?
The salient aspect of Ted Smith’s statements was implied in the ‘phase coherency/phase noise’ of the interpolations, where an abundance of timing error would be perceivable… as he states, a couple of parts per million of timing offset is not audible to anybody.
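Two rough numbers behind those rhetorical points (my own estimates; the propagation speed and the audibility threshold are both assumptions): a clock trace a few centimetres long adds a fixed delay on the order of tens to hundreds of picoseconds, which is a static offset rather than jitter; and a couple of ppm of clock-rate offset corresponds to a pitch change of a few thousandths of a cent, far below commonly cited audibility thresholds.

```python
import math

# (1) Fixed propagation delay of a short on-board clock trace.
#     Rule of thumb: signals travel at roughly half the speed of light in PCB
#     dielectric, about 6-7 ps per mm (an assumption, not a measurement).
PS_PER_MM = 6.7
for length_mm in (10, 30, 50):
    print(f"{length_mm} mm trace ≈ {length_mm * PS_PER_MM:.0f} ps of fixed delay")

# (2) A small clock-rate (ppm) offset, expressed as a pitch change in cents.
ppm = 2.0
cents = 1200 * math.log2(1 + ppm * 1e-6)
print(f"{ppm} ppm offset ≈ {cents:.4f} cents of pitch shift")
# Commonly cited just-noticeable pitch differences are on the order of several
# cents, so a couple of ppm is orders of magnitude below audibility.
```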
Thank you, but it seems that, unlike Roon, there is currently no streamer on the market with the Audirvana server on board. But the world is moving forward, and everything is ahead of us.