In manufacturers' brochures from the 1980s one frequently reads that oversampling was necessary for technical reasons and, moreover, increased sound quality.
But how did it really come about?
It was a coincidence. Without it, the history of the CD standard could theoretically have been quite different.
Very early on, the British broadcaster BBC had advocated digital-to-analog conversion with 13 bits.
As early as the 1970s, Philips had developed an optical disc player, but it was unsuccessful.
The equipment was far too expensive; only 400 units were sold, and the Dutch quickly stopped production.
Still, the know-how from this optical disc player flowed at Philips into a 14-bit digital-to-analog converter for audio purposes.
Then Sony came into play as a partner for the CD launch.
The Japanese recommended a 16-bit standard.
After some back and forth, Philips and Sony finally agreed on the digital standard of 16 bit and 44.1 kHz.
But Philips had no finished 16-bit converter.
What to do? Delay the launch of CD technology?
Nobody knows the background of such stories better than the people who were involved at the time.
Kees A. Schouhamer Immink was co-developer of this first CD player from Philips, the CD 100.
He described the origin of oversampling in a report:
"We at Philips argued that it was impossible to redesign the finished 14-bit converter to 16 bits in a short time.
But my colleague Karel Dijkmans said:
No problem, I know a little trick to turn our 14-bit converter into a 16-bit converter.
The trick is called oversampling.
Marketing will then make a virtue of necessity."
As the integration density of the chips grew with further development and ever more functions were implemented, the chances of correcting this wrong turn disappeared.
Today there are hardly any converter chips without built-in oversampling and complete output stages of the simplest design.
These converters make it possible to produce millions of cheap cell phones and MP3 players,
but they move ever further away from ideal, lifelike reproduction.
They therefore have no place in a high-quality music playback device.
The dilemma for developers today is that there are hardly any other chips left to fall back on.
By removing the digital filters, the temporal precision of the music playback is audibly improved.
The sound gain, however, depends on a number of interlocking modifications.
The "advantage" of the digital filters used lies in relaxing the demands on the subsequent analog filters - not in increasing the data by interpolation, as if that could add something that is not present in the signal.
This is indicated in the datasheets of these digital filters.
Analog filters intended to remove the band above the 20 kHz listening range become more expensive to produce as their slope increases (which is what improves the filtering effect).
They also cause adverse phase shifts within the listening range.
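This phase problem can be illustrated with a toy model. The sketch below assumes a cascade of identical first-order RC sections with a hypothetical 25 kHz corner frequency - a simplification, not any real reconstruction filter - and shows how the phase shift at the top of the audible band grows with the number of sections:

```python
import math

def rc_cascade_phase_deg(f, fc, stages):
    """Phase shift (in degrees) at frequency f of `stages` identical
    first-order RC low-pass sections with corner frequency fc.
    A simplified stand-in for a real reconstruction filter."""
    return -stages * math.degrees(math.atan(f / fc))

fc = 25_000.0  # hypothetical corner just above the 20 kHz audio band
for stages in (1, 3, 7):
    phase = rc_cascade_phase_deg(20_000.0, fc, stages)
    print(f"{stages} stage(s): phase at 20 kHz = {phase:.1f} deg")
```

Each added section steepens the slope, but also multiplies the phase rotation at 20 kHz - in this toy model a single section already shifts about -39 degrees, and seven sections over -270 degrees.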
According to official information from the developer, the built-in Philips digital decoders (which sit in front of the standard digital filter) are approved for direct operation into analog converters without a digital filter - this is clearly described in the data sheets.
Theoretically, from an audio developer's point of view, operation without a digital filter would require the analog filter to have a steeper slope - as was practiced in the first Sony CD players.
Proponents of non-oversampling often argue that filtering is unnecessary because human hearing cannot perceive these frequencies. Practice shows, however, that even supposedly inaudible frequencies convey a better sense of spatial precision.
It is known that sound becomes more directional (bundled) as frequency increases.
This makes it easier to locate the sound source with multiple receivers.
Although a human has only two receivers (ears) and the information itself is not consciously perceived, the spatial impression of the reproduction is significantly enhanced.
The music becomes more transparent, instruments or voices separate more clearly.
So there are good reasons to do without oversampling and analog filters.
However, the effect of non-oversampling alone, i.e. the removal of the digital filters, is somewhat overestimated.
The advantage lies in the fact that the (inevitable) disadvantages of digital filters are eliminated.
For music playback this means: Artifacts are eliminated that are not present in the original signal.
These result from the overlaying of several samples, some of which deviate from the original sample.
In addition, the working digital filter contaminates the digital power supply.
The 14-bit Philips CD players, with their largely discrete construction, are an invaluable asset. Freeing them from their digital filters, however, is only one step among many.
In addition, a thorough restoration should provide the constructive basis for all the interlocking measures.
It's exciting to create something new.
Some grab a soldering iron, others start a computer simulation.
Everyone seems to have their own concept.
In my case, it starts with going back to basics.
After examining the following two aspects, I came to the conclusion that it is problematic to change the original signal by oversampling.
1. Oversampling and jitter
There are two axes in the digitization of the sound.
The time axis and the amplitude axis.
In the case of the CD these are 44.1 kHz and 16 bits.
16-bit accuracy determines how exactly the acoustic energy (time x amplitude) is distributed across the 16-bit amplitude steps.
In other words, every 22.7 µs we pack data into a 16-bit amplitude value.
This results in a maximum error of ±0.5 LSB.
However, these errors only affect the amplitude axis, no errors are allowed on the time axis.
The more accurate the amplitude data has to be, the less error we can afford to distribute along the time axis.
If we assign half (1/2) of the error budget to the time axis, the result is (1 / 44.1 kHz / 2^16) / 2 ≈ 173 ps.
This is the upper limit of the permissible error - the jitter.
Acceptable error at 44.1 kHz / 16 bit
Acceptable error at 8x oversampling / 20 bit
At 8x oversampling and 20 bits, this limit would be 1.35 ps.
This sounds good at first, but it cannot be achieved by a real converter, which needs longer than that to readjust its timing via PLL.
As a result, in an average jitter environment, oversampling actually decreases the accuracy of the system.
In short: oversampling does not preserve the precision of the original 16-bit data.
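The jitter budget above can be checked with a few lines of arithmetic - a direct transcription of the formula in the text, nothing more:

```python
def jitter_limit_ps(fs_hz, bits):
    """Half of one LSB-equivalent time step: the upper bound for
    permissible timing error (jitter), in picoseconds."""
    return (1.0 / fs_hz / 2**bits) / 2 * 1e12

print(jitter_limit_ps(44_100, 16))      # ≈ 173 ps  (plain 16 bit)
print(jitter_limit_ps(8 * 44_100, 20))  # ≈ 1.35 ps (8x / 20 bit)
```

Raising the sample rate eightfold and the word width by four bits tightens the timing budget by a factor of 128 - which is exactly why real clock recovery struggles to keep up.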
2. Oversampling and high bit
Originally, oversampling was developed to allow the use of an analog filter with a gentler slope.
A common misconception is that its purpose is to increase the data - even if that is what results.
The principle of a digital FIR filter is to shift the original data in time and overlay the shifted copies.
When this overlay is formed by multiplying with the filter coefficients, far more information has to fit into the 16-bit data window.
This higher data density can only be carried by a wider window, i.e. a greater word width than the 16 bits of the source data.
For example, with the SM5842 digital filter, this data window width is 32 bits.
The digital filter rounds off the output to 20 bits and generates further errors when re-quantizing.
Such errors can generally be excluded only by maintaining the original 16-bit data window width.
Thus we cannot carry the higher data volume in the original 16-bit window, even if we wanted to.
Plain 16 bit without oversampling is therefore always more accurate than oversampling with high bit.
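The word-width growth can be sketched with a worst-case estimate. The figures below are hypothetical (16-bit coefficients, 25 taps, chosen only for illustration); what matters is the mechanism - product width plus accumulation growth:

```python
import math

def fir_output_bits(in_bits, coeff_bits, taps):
    """Worst-case word width of a fixed-point FIR accumulator:
    product width (in + coeff bits) plus log2(taps) growth
    from summing the tap products."""
    return in_bits + coeff_bits + math.ceil(math.log2(taps))

# Hypothetical figures: 16-bit samples, 16-bit coefficients, 25 taps
print(fir_output_bits(16, 16, 25))  # 37 bits needed internally
```

Any output window narrower than this internal width - for example a 20-bit output from a 32-bit internal window - forces a re-quantization step, which is where the additional errors enter.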
Image 4 - Continuation of the image noise
What will happen without oversampling?
Theoretically, the image noise repeats endlessly toward higher frequencies (Image 4).
The conventional answer would be that this does not sound good, and that steep analog filters - with disadvantages of their own - must therefore follow the converter to remove the noise.
Is that so?
One challenge would be to address the Shannon theorem on the transmission of information.
But I think the perception of information is more important.
The limitation of our sense of hearing is itself a powerful low-pass filter, and the Shannon theorem can confidently content itself with that.
I cannot hear good sound through theories and oscilloscopes.
One might think that frequencies people cannot hear must be filtered out anyway.
However, even 8x oversampling filters only suppress frequencies between 22.05 kHz and about 330 kHz.
Everything above 330 kHz remains untouched, so even here one is left hoping for a good outcome.
I share that hope; it will not be a problem.
Problems arise only from the side effects that accompany these filters.
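The position of these image bands follows directly from the sampling rate: a band [0, 22.05 kHz] reappears mirrored around every multiple of fs. A small sketch (the 330.75 kHz figure matches the ~330 kHz quoted above):

```python
def image_bands(fs_hz, audio_limit_hz, n_images=4):
    """First few spectral images of a signal band [0, audio_limit]
    sampled at fs: they appear at n*fs ± audio_limit."""
    return [(n * fs_hz - audio_limit_hz, n * fs_hz + audio_limit_hz)
            for n in range(1, n_images + 1)]

# Without oversampling, images start just above 22.05 kHz:
for lo, hi in image_bands(44_100, 22_050, 3):
    print(f"{lo/1000:.2f} kHz .. {hi/1000:.2f} kHz")

# With 8x oversampling the first image starts at
# 8*44.1 kHz - 22.05 kHz = 330.75 kHz
```

This is why an 8x oversampling filter only has to clean the region up to roughly 330 kHz; everything above that is image content around the new, higher sampling rate.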
Digital Filter Problems
Image 5 shows the principle of FIR digital filters.
The "T" stands for a delay at each sampling interval, "a" is for the coefficient multiplier and "+" is the adder.
After delaying the input data and multiplying by the coefficient, this process is repeated "n" times.
Here "n" is the number of taps. The more taps the filter has, the higher its performance is supposed to be.
The delay is not a computed quantity, but simply the time the data spends waiting.
T: delay circuit for each sampling interval
a: coefficient multiplier
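The structure in Image 5 can be sketched in a few lines - a minimal direct-form FIR as described above, not the implementation of any particular chip:

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR: a delay line of past samples (the 'T' blocks),
    each multiplied by a coefficient ('a') and summed ('+')."""
    delay = [0.0] * len(coeffs)
    out = []
    for x in samples:
        delay = [x] + delay[:-1]  # shift the delay line by one sample
        out.append(sum(a * d for a, d in zip(coeffs, delay)))
    return out

# A trivial 3-tap moving average: a single impulse is smeared
# across three output samples.
print(fir_filter([3.0, 0.0, 0.0, 0.0], [1/3, 1/3, 1/3]))
```

The example makes the time-smearing visible: one input sample leaves the filter spread over as many output samples as there are taps.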
It is not easy to instinctively understand this diagram.
The process becomes a bit clearer in image 6.
The delay acts like a time delay of the sound.
The accumulated delay per path is 1.92 ms, 0.16 ms and 0.05 ms; altogether 2.13 ms.
Our sense of hearing can perform a frequency analysis roughly every 2 ms, so a delay of 2.13 ms is audible.
Acoustically, this corresponds to a noticeable distance of 73.3 cm between the front and rear loudspeakers.
Can you now imagine what kind of music is produced in this way?
All sounds from the speakers blur together in time into a small echo.
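The delay-to-distance conversion is easy to verify; assuming a speed of sound of about 344 m/s reproduces the 73.3 cm quoted above:

```python
SPEED_OF_SOUND = 344.0  # m/s, assumed value for warm room air

# Accumulated path delays quoted in the text, in milliseconds:
delays_ms = [1.92, 0.16, 0.05]
total_ms = sum(delays_ms)
distance_cm = total_ms / 1000 * SPEED_OF_SOUND * 100

print(f"total delay:   {total_ms:.2f} ms")
print(f"as a distance: {distance_cm:.1f} cm")
```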
Image 7 shows the arrangement of the recording microphones.
If you have always felt that there was something missing in the sound of a digital recording, please check this presentation carefully.
Recording this way and then mixing everything together makes no sense.
Just as pointless is a recording made with a single microphone when a digital filter is used in playback.
Even in theory, a digital filter would only make sense if its delay were significantly below the noticeable 2 ms.