Increasing bit resolution with oversampling


Increasing ADC resolution

Many electronic designs contain an ADC, or more than one, to read various signals and voltages. Often, these ADCs are included as part of the microcontroller (MCU) being used. This means, once you pick your MCU, you have chosen the maximum resolution (calculated from the number of bits in the ADC and the reference) you will have for taking a reading.


What happens if, later in the design, you find out you need slightly more resolution from the ADC? Not to worry, there are some simple ways to improve the resolution of the sampled data. I discussed one method in a previous EDN Design Idea (DI), “Adaptive resolution for ADCs,” which talked about changing the reference voltage, so I won’t discuss that here. Another way of improving the resolution is through the concept of oversampling.

FYI: The formal definition of oversampling is taking samples at a rate faster than the Nyquist rate. As the Nyquist rate is just over twice the highest frequency of interest, this is somewhat of a loose definition. Typically, engineers use 3 to 5 times the Nyquist rate for basic sampling, and they usually don’t consider that oversampling.

A simple version of oversampling

Let’s first look at a method that is essentially a simplified version of oversampling: averaging. (Most embedded programmers have used averaging to improve their readings, often with the thought of minimizing the effects of bad readings rather than improving resolution.)

So, suppose you’re taking a temperature reading from a sensor once a second. Now, to get better resolution on the temperature, take a reading every 500 ms and average the two readings together. This gives you another ½ bit of resolution (we’ll show the math later). Let’s go further: take readings every 250 ms and average four readings. This gives you a whole extra bit of resolution.

If you have an 8-bit ADC scaled to read 0 to 255 degrees with 1-degree resolution, you will now have a virtual 9-bit ADC capable of returning readings of 0 to 255.5 degrees with 0.5-degree resolution. If you average 16 readings, you will create a virtual 10-bit ADC from your 8-bit ADC. Averaging 64 readings creates a virtual 11-bit ADC, giving your 8-bit ADC three extra bits and a resolution of one part in 2048 (or, in the temperature sensor example, about 0.12 degrees).
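To make this concrete, here is a minimal firmware sketch of the sum-and-shift form of this trick, where `adc_read_8bit()` is a hypothetical stand-in for your MCU’s ADC read routine. Summing 16 samples and shifting right by 2 divides the sum by 4, which leaves the 16-sample average expressed in 10-bit counts:

```c
#include <stdint.h>

/* Hypothetical stand-in for your MCU's ADC read routine. */
extern uint8_t adc_read_8bit(void);

/*
 * Gain b = 2 extra bits by summing M = 4^b = 16 samples and
 * shifting right by b. The shift divides the sum by 4, leaving
 * the 16-sample average expressed in 10-bit counts (0 to 1020).
 */
uint16_t adc_read_10bit(void)
{
    uint16_t sum = 0;            /* max 16 * 255 = 4080, fits in 16 bits */

    for (int i = 0; i < 16; i++)
        sum += adc_read_8bit();

    return sum >> 2;             /* sum / 4 = (sum / 16) * 4 */
}
```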

A formula for averaging

The formula for extra bits versus the number of samples averaged is:

Number of samples averaged = M
Number of extra (virtual) bits created = b

M = 4^b

If you want to solve for b given M: b = log4(M)
Or, since log2(4) = 2: b = log2(M) / 2
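As a quick sanity check, the formula drops into a one-line C helper (a sketch of my own, using the identity 4^b = 2^(2b)):

```c
#include <stdint.h>

/* Samples to average for b extra bits of resolution: M = 4^b = 2^(2b).
   needed_samples(1) == 4, needed_samples(2) == 16, needed_samples(3) == 64. */
static inline uint32_t needed_samples(uint32_t extra_bits)
{
    return UINT32_C(1) << (2 * extra_bits);
}
```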

You may be scratching your head, wondering where that formula comes from. First, let’s think about the readings we are averaging. They consist of two parts. The first is the true, clean reading the sensor is trying to give us. The second part is the noise that we pick up from extraneous signals on the wiring, power supplies, components, etc. (These two signal parts combine in an additive way.)

We will assume that this noise is Gaussian (statistically normally distributed; often shown as a bell curve; sometimes referred to as white noise) and uncorrelated with our sample rate. Now, when taking the average, we first sum up the readings. The clean readings from the sensor obviously sum in the usual way. For the noise part, though, the standard deviation of the sum is the square root of the sum of the variances. In other words, the clean part grows linearly with the number of readings, while the noise part grows only as the square root of the number of readings. (One caveat: this trick only buys resolution if the noise spans at least one LSB of the ADC; with a perfectly clean, constant input, every sample returns the same code and averaging adds nothing new.)

What this means is that not only is the resolution increased, but the signal-to-noise ratio (SNR) improves by M/sqrt(M), which reduces to sqrt(M). In simpler terms, the averaged reading’s SNR improves by the square root of the number of samples averaged. So, if we take four readings, the SNR of the average improves by a factor of 2, the equivalent of one more bit in the ADC (an 8-bit ADC performs as a 9-bit ADC).
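If you want to convince yourself of the sqrt(M) behavior, the short, self-contained C simulation below (my own sketch, not part of the original design) averages M = 4 unit-variance noise samples many times and measures the spread of the result. It should print a standard deviation near 1/sqrt(4) = 0.5:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Approximate unit-variance Gaussian noise from the sum of 12
   uniform variates (central-limit trick): mean 0, std dev ~1. */
static double gauss(void)
{
    double s = 0.0;
    for (int i = 0; i < 12; i++)
        s += (double)rand() / RAND_MAX;
    return s - 6.0;
}

int main(void)
{
    const int M = 4;        /* samples per average         */
    const int N = 100000;   /* number of averaged readings */
    double sum_sq = 0.0;

    for (int n = 0; n < N; n++) {
        double avg = 0.0;
        for (int m = 0; m < M; m++)
            avg += gauss();
        avg /= M;
        sum_sq += avg * avg;
    }

    /* Noise std dev should drop from ~1.0 to ~1/sqrt(M) = 0.5. */
    printf("std dev after averaging %d samples: %f\n", M, sqrt(sum_sq / N));
    return 0;
}
```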

This whole concept may look different, but you probably learned it in a science or stats class. There, they taught that you had to average 100 readings to get one more significant digit. It’s the same thing, except the class worked in base 10 and here we’re working in base 2.

Averaging downsides

I have used averaging in many pieces of firmware, but it’s not always the best solution. As noted earlier, your sensor connection passes your ADC a good signal with some noise added to it. A simple average is, in filter terms, a moving-average (boxcar) filter, and it has two weaknesses: a slow roll-off in the frequency domain and poor stopband attenuation. Both mean that averaging lets a good portion of the noise through to your signal. So, we may have increased the resolution of the reading, but we have not removed all the noise from the signal that we can.

Reducing the noise

To reduce this noise, which is spread over the full frequency spectrum coming down the sensor wire, you may want to apply an actual lowpass filter (LPF) to the signal. This can be a hardware LPF applied before the ADC, a digital LPF applied after the ADC, or both. (Oversampling makes these filters easier to design, as the roll-off can be less steep.)

There are many types of digital filters, but the two major ones are the finite impulse response (FIR) and the infinite impulse response (IIR) filters. I won’t go into the details of these filters here, except to say that they are designed by trading off passband frequency, roll-off rate, ripple, phase shift, etc.
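To give a flavor of how little code a digital LPF can take, here is a minimal fixed-point sketch of a one-pole IIR lowpass, y[n] = y[n-1] + alpha*(x[n] - y[n-1]). Treat it as an illustration, not a designed filter; a real design would choose alpha, or a set of FIR taps, from the tradeoffs above:

```c
#include <stdint.h>

/*
 * One-pole IIR lowpass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]),
 * with alpha = 1/2^ALPHA_SHIFT so the multiply becomes a shift.
 * The accumulator keeps 8 fractional bits so truncation doesn't
 * eat the resolution we just gained.
 */
#define ALPHA_SHIFT 3   /* alpha = 1/8 */

uint16_t lpf_update(uint16_t x)
{
    static int32_t acc = 0;                          /* y[n-1] scaled by 256 */

    acc += ((((int32_t)x << 8) - acc) >> ALPHA_SHIFT);
    return (uint16_t)(acc >> 8);                     /* back to sample units */
}
```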

A more advanced approach to oversampling

So, let’s look at a design that creates a more advanced oversampling system. Figure 1 shows a typical layout for a more formal, and better, oversampling design.

Figure 1 A typical oversampling block diagram with an antialiasing filter, ADC, digital LPF, and decimation (down-sampling).

We start by filtering the incoming signal with an analog hardware LPF (often referred to as an antialiasing filter). This filter is typically designed to cut off just above the highest frequency of interest.

Note: not shown, but a good design feature, is a flywheel circuit close to the ADC input. The capacitor in this RC circuit provides the charge that holds up the voltage level when the ADC samples the input. More on this can be found in “ADC Driver Ref Design Optimizing THD, Noise, and SNR for High Dynamic Range.” A special callout to Mr. Rob Zeppetelle for tracking down this obscure flywheel circuit.

The ADC then samples the signal at a rate many times (M times) the Nyquist rate of the frequency of interest. Then, in the system’s firmware, the incoming sample stream is low-pass filtered again, this time with a digital filter (typically an FIR or IIR), to further remove the signal’s Gaussian noise as well as the quantization noise created during the ADC operation. (Various filter designs can also be useful for other kinds of noise, such as impulse noise, burst noise, etc.) Oversampling gave us the benefit of spreading the noise over the wide oversampled bandwidth, and our digital lowpass filter can remove much of it.

Next, we decimate the signal’s data stream. Decimation (also known as down-sampling) is simply the act of keeping only every 2nd, 3rd, or 4th, up to every Mth, sample and tossing the rest. This is safe because of the oversampling and the lowpass filtering, so very little noise will alias into the lower-rate signal. Decimation essentially reduces the bandwidth represented by the remaining samples, and further processing now requires less processing power since the number of samples is significantly reduced. A firmware sketch of this filter-then-decimate step is shown below.
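Putting the last two blocks of Figure 1 together, the filter-then-decimate step might look like this in firmware; `process_sample()` is a hypothetical downstream handler, and `lpf_update()` is the digital LPF (the one-pole sketch above, or a proper FIR/IIR design):

```c
#include <stdint.h>

/*
 * Filter-then-decimate sketch: run every incoming sample through the
 * lowpass filter, but hand only every Mth output to the rest of the
 * system.
 */
#define DECIMATION_M 16

extern uint16_t lpf_update(uint16_t sample);   /* digital LPF from the sketch above */
extern void process_sample(uint16_t sample);   /* hypothetical downstream handler   */

void adc_sample_isr(uint16_t raw)   /* called at the oversampled rate */
{
    static unsigned count = 0;

    uint16_t filtered = lpf_update(raw);   /* filter EVERY sample...    */
    if (++count >= DECIMATION_M) {         /* ...but keep only one in M */
        count = 0;
        process_sample(filtered);
    }
}
```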

It works

This stuff really works. I once worked on a design that required us to receive very small signals being transmitted on a power line (< 1 W). The signal was attenuated by capacitors on the lines, various transformers, and all the customer’s devices plugged into the powerline. The signal to be received was around 10 µV riding on the 240-VAC line. We ended up oversampling by around 75 million times the Nyquist rate and were able to successfully receive the transmissions at over 100 miles from the transmitter.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

