Digital Pre-Distortion: Solving the Limitations of Traditional Amplifier Designs
Digital pre-distortion is a method of intentionally distorting signals to compensate for the non-linearities of an amplifier. It has become essential for achieving the high efficiency and linearity required for 5G networks, but traditional designs face challenges with cross-talk and power requirements. As technology evolves, engineers must continue to find new solutions to improve communication systems.
A quiet but important evolution has been taking place in engineering over recent decades. As technology evolves, it has become increasingly clear that building devices that are physically as close to perfect as possible is not always the right approach, because it often leads to designs that are expensive, complex to build, and power-hungry.
Engineers, especially electronic engineers, have become skilled at using highly imperfect devices in ways that make them behave close enough to the ideal case to be useful in practice. A well-known historical example is disk drives, where advances in control systems have made it possible to achieve incredible storage densities using electromechanical hardware littered with imperfections such as nonlinearities and instabilities of various kinds.
A similar problem has been emerging in radio communication systems. As carrier frequencies keep increasing and channel packing becomes ever denser, the linearity requirements for the radio-frequency power amplifiers (RF-PAs) used in telecommunication systems have become more stringent. Traditionally, the best linearity is provided by designs known as "Class A," which sacrifice large amounts of power to keep the transistors operating in the region where they respond most linearly.
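To make that power cost concrete: an ideal Class A stage draws constant DC power regardless of output level, so its drain efficiency peaks at a theoretical 50% and falls linearly as the output backs off. A minimal numerical sketch (idealized figures, ignoring real-device losses):

```python
# Illustrative efficiency of an ideal Class A stage, which draws
# constant DC power regardless of instantaneous output level.
# Peak drain efficiency is 50%; it falls linearly with output backoff.

def class_a_efficiency(backoff_db: float) -> float:
    """Drain efficiency of an ideal Class A amplifier at a given
    output backoff (dB below peak output power)."""
    return 0.5 * 10 ** (-backoff_db / 10)

for backoff in (0, 3, 6, 10):
    print(f"{backoff:>2} dB backoff -> {class_a_efficiency(backoff):.1%} efficiency")
# 0 dB -> 50.0%, 3 dB -> 25.1%, 6 dB -> 12.6%, 10 dB -> 5.0%
```

At the deep backoff levels that modern waveforms force on the amplifier, efficiency collapses to single digits, which is exactly why more efficient but less linear designs are so attractive.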
Highly energy-efficient designs, on the other hand, suffer from nonlinearities that render them unusable without suitable correction. The situation has been getting worse because the modulation schemes used by the latest cellular systems have a very high ratio between their lowest- and highest-power symbols, i.e., a high peak-to-average power ratio (PAPR). Specific RF-PA types such as Doherty amplifiers are highly suitable and power-efficient, but their native non-linearity is not acceptable without correction.
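As a rough sketch of where that high PAPR comes from (the subcarrier count and QPSK modulation below are illustrative choices, not a specific 5G configuration), summing many independently modulated subcarriers in an OFDM-style signal produces rare but large peaks that the amplifier must reproduce without clipping:

```python
import numpy as np

# Rough sketch: peak-to-average power ratio (PAPR) of an OFDM-like
# multicarrier signal of the kind used by 4G/5G. Summing many
# independently modulated subcarriers produces occasional large peaks.
rng = np.random.default_rng(0)
n_subcarriers = 1024

# Random QPSK symbols on each subcarrier; one OFDM symbol via IFFT.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_subcarriers)
waveform = np.fft.ifft(symbols)

power = np.abs(waveform) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")
# Typically ~9 dB for a single symbol; the worst case grows further
# over many symbols and with oversampling.
```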
Over the last two decades, high-speed digital signal processing has become widely available, economical, and power-efficient. This has led to the emergence of algorithms that correct amplifier non-linearities in real time by intentionally "distorting" the signal in a way that compensates for the amplifier's physical response.
These algorithms have become collectively known as digital pre-distortion (DPD), and represent an evolution of earlier implementations of the same approach in the analog domain. Over the years, many types of DPD algorithms have been proposed, typically involving real-time feedback from the amplifier through a so-called "observation signal," along with fairly intensive computation.
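To give a flavor of how the simplest such schemes work, here is a minimal sketch of a memoryless polynomial predistorter fitted by least squares using the classic indirect-learning idea: a post-inverse is fitted from the observation signal back to the amplifier input, then reused as the predistorter. The amplifier model, polynomial order, and signal statistics below are all illustrative assumptions; production systems add memory terms and adapt continuously.

```python
import numpy as np

# Toy memoryless polynomial DPD via the indirect-learning idea:
# fit a polynomial mapping the observed PA output back to the PA
# input, then apply that polynomial *before* the PA.
rng = np.random.default_rng(1)

def pa(x):
    # Hypothetical odd-order nonlinearity standing in for the amplifier.
    return x + 0.15 * x * np.abs(x) ** 2 - 0.05 * x * np.abs(x) ** 4

def poly_basis(x, order=7):
    # Odd-order terms x, x|x|^2, x|x|^4, ... commonly used in PA models.
    return np.column_stack([x * np.abs(x) ** (k - 1) for k in range(1, order + 1, 2)])

# Training signal: complex baseband samples with a spread of amplitudes.
x = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) * 0.3
y = pa(x)    # the "observation signal" fed back from the PA
gain = 1.0   # small-signal gain of this toy PA

# Least-squares fit of the post-inverse: coeffs such that
# poly_basis(y / gain) @ coeffs ~= x, then reused as the predistorter.
coeffs, *_ = np.linalg.lstsq(poly_basis(y / gain), x, rcond=None)

def dpd(x):
    return poly_basis(x) @ coeffs

# Linearity check: normalized error between desired and actual output.
err_raw = np.mean(np.abs(pa(x) - gain * x) ** 2) / np.mean(np.abs(x) ** 2)
err_dpd = np.mean(np.abs(pa(dpd(x)) - gain * x) ** 2) / np.mean(np.abs(x) ** 2)
print(f"NMSE without DPD: {10 * np.log10(err_raw):.1f} dB")
print(f"NMSE with DPD:    {10 * np.log10(err_dpd):.1f} dB")
```

Even this toy version shows the essential loop: observe the output, fit a correction, and pre-apply it so that the cascade of predistorter and amplifier behaves like a clean linear gain.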
While this approach has been instrumental to the development of third- and fourth-generation cellular networks (3G, 4G), it falls short of the emerging requirements of fifth-generation (5G) networks for two reasons. First, dense antenna arrays suffer significant disturbances between adjacent elements, known as cross-talk (sketched below), which make it difficult to obtain clean observation signals and cause instability.
The situation is made considerably worse by the use of ever-increasing frequencies. Second, dense antenna arrays require very low-power solutions, which is incompatible with the substantial real-time computation that DPD algorithms in their current form demand.
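To illustrate the first problem, the toy sketch below shows how leakage from neighboring elements contaminates the observation signal that a DPD adaptation loop would rely on (the array size and coupling factor are purely illustrative):

```python
import numpy as np

# Sketch of why cross-talk corrupts the observation path in a dense
# array: the coupler behind element 0 no longer sees only its own PA
# output but a mixture that includes the neighbors' signals.
rng = np.random.default_rng(2)
n_elements, n_samples = 4, 2000

# Independent complex baseband signals, one per element (toy stand-ins).
signals = (rng.standard_normal((n_elements, n_samples))
           + 1j * rng.standard_normal((n_elements, n_samples)))

coupling = 0.2  # assumed leakage factor from each of the other elements
observed = signals[0] + coupling * signals[1:].sum(axis=0)

# How far the observation deviates from the true element-0 output:
nmse = np.mean(np.abs(observed - signals[0]) ** 2) / np.mean(np.abs(signals[0]) ** 2)
print(f"Observation NMSE due to cross-talk: {10 * np.log10(nmse):.1f} dB")
# About -9 dB with these illustrative numbers: far too dirty a
# reference for a DPD loop to converge on cleanly.
```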