
When Will Hybrid Beamforming Disappear?

There has been a lot of fuss about hybrid analog-digital beamforming in the development of 5G. Strangely, it is not because of this technology’s merits but rather due to general disbelief in the telecom industry’s ability to build fully digital transceivers in frequency bands above 6 GHz. I find this rather odd; we are living in a society that is becoming increasingly digitalized, with everything changing from analog to digital. Why would wireless technology suddenly move in the opposite direction?

When Marzetta published his seminal Massive MIMO paper in 2010, the idea of having an array with a hundred or more fully digital antennas was considered science fiction, or at least prohibitively costly and power consuming. Today, we know that Massive MIMO is actually a pre-5G technology, with 64-antenna systems already deployed in LTE systems operating below 6 GHz. These antenna panels are commercially very competitive; 95% of the base stations that Huawei is currently selling have at least 32 antennas. This fast technological development demonstrates that the initial skepticism against Massive MIMO was based on misconceptions rather than fundamental facts.

In the same way, there is nothing fundamental that prevents the development of fully digital transceivers in mmWave bands; it is only a matter of time before such transceivers are developed and come to dominate the market. With digital beamforming, we can get rid of the complicated beam-searching and beam-tracking algorithms that have been developed over the past five years and achieve a simpler and more reliable system operation, particularly when using TDD operation and reciprocity-based beamforming.
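To make the contrast concrete, here is a minimal Python sketch of what reciprocity-based beamforming boils down to when every antenna has its own digital transceiver: the base station estimates the channel from a single uplink pilot and reuses the conjugate of the estimate as the downlink beamformer, with no beam sweeping involved. The antenna count, the SNR value, and the i.i.d. Rayleigh channel are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 64            # number of fully digital antennas at the base station (illustrative)
snr_pilot = 10.0  # uplink pilot SNR in linear scale (illustrative)

# True channel (i.i.d. Rayleigh fading used as a stand-in for a real mmWave channel)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Uplink pilot: the user sends one known symbol, observed simultaneously on all M antennas
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y_pilot = np.sqrt(snr_pilot) * h + noise
h_hat = y_pilot / np.sqrt(snr_pilot)      # least-squares channel estimate

# TDD reciprocity: reuse the uplink estimate for downlink maximum ratio transmission (MRT)
w = h_hat.conj() / np.linalg.norm(h_hat)

beamforming_gain = np.abs(h @ w) ** 2
print(f"Beamforming gain: {beamforming_gain:.1f} (with perfect CSI: {np.linalg.norm(h)**2:.1f})")
```

Since the estimate is obtained from one pilot that excites all antennas at once, there is no need to search over a codebook of angular beams, which is the simplification alluded to above.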

Figure 1: Photo of the experimental equipment with 24 digital transceivers that was used by NEC. It uses 300 MHz of bandwidth in the 28 GHz band.

I didn’t jump onto the hybrid beamforming research train since it already had many passengers and I thought that this research topic would become irrelevant after 5-10 years. But I was wrong – it now seems that digital solutions will be released much earlier than I thought. At the 2018 European Microwave Conference, NEC Corporation presented an experimental verification of an active antenna system (AAS) for the 28 GHz band with 24 fully digital transceiver chains. The design is modular and consists of 24 horizontally stacked antennas, which means that the same design could be used for even larger arrays.

Tomoya Kaneko, Chief Advanced Technologist for RF Technologies Development at NEC, told me that they aim to release a fully digital AAS in just a few years. So maybe hybrid analog-digital beamforming will be replaced by digital beamforming already at the beginning of the 5G mmWave deployments?

Figure 2: Illustration of what is found inside the AAS box in Figure 1. There are 12 horizontal cards, with two antennas and transceivers each. The dimensions are 308 mm x 199 mm.

That said, I think that hybrid beamforming algorithms will have new roles to play in the future. The first generations of new communication systems might reach the market faster by using a hybrid analog-digital architecture, which requires hybrid beamforming, than by waiting for a fully digital implementation to be finalized. This could, for example, be the case for holographic beamforming or MIMO systems operating in the sub-THz bands. There will also continue to exist non-mobile point-to-point communication systems with line-of-sight channels (e.g., satellite communications) where analog solutions are enough to achieve all the performance gains that MIMO can provide.

Is It Time to Forget About Antenna Selection?

Channel fading has always been a limiting factor in wireless communications, which is why various diversity schemes have been developed to combat fading (and other channel impairments). The basic idea is to obtain many “independent” observations of the channel and exploit the fact that it is unlikely that all of these observations are in a deep fade simultaneously. These observations can be obtained over time, frequency, space, polarization, etc.

Only one antenna is used at a time when using antenna selection.

Antenna selection is the basic form of space diversity. Suppose a base station (BS) equipped with multiple antennas applies antenna selection. In the uplink, the BS only uses the antenna that currently gives the highest signal-to-interference-and-noise ratio (SINR). In the downlink, the BS only transmits from the antenna that currently has the highest SINR. As the user moves around, the fading changes and we, therefore, need to reselect which antenna to use.

The term antenna selection diversity can be traced back to the 1980s, but this diversity scheme was analyzed already in the 1950s. One well-cited paper from that time is Linear Diversity Combining Techniques by D. G. Brennan. This paper demonstrates mathematically and numerically that selection diversity is suboptimal, while the scheme called maximum-ratio combining (MRC) always provides a higher SINR. Hence, instead of only selecting one antenna, it is preferable for the BS to coherently combine the signals from/to all the antennas to maximize the SINR. When the MRC scheme is applied in Massive MIMO with a very large number of antennas, we often talk about channel hardening, but this is nothing but an extreme form of space diversity that almost entirely removes the fading effect.
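A quick Monte Carlo experiment, sketched below in Python, reproduces Brennan’s conclusion: in this interference-free toy example, the post-combining SNR of MRC is the sum of the per-antenna SNRs, so it can never fall below the SNR of the best individual antenna. The i.i.d. Rayleigh fading, the number of antennas, and the average SNR are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

M = 4            # number of receive antennas (illustrative)
snr = 1.0        # average SNR per antenna, linear scale (illustrative)
trials = 100_000

# i.i.d. Rayleigh fading channels to the M antennas
h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
gains = snr * np.abs(h) ** 2

snr_selection = gains.max(axis=1)   # antenna selection: use only the strongest antenna
snr_mrc = gains.sum(axis=1)         # MRC: coherently combine all antennas

print(f"Average SNR with selection: {snr_selection.mean():.2f}")
print(f"Average SNR with MRC:       {snr_mrc.mean():.2f}")
print(f"MRC is at least as good in {np.mean(snr_mrc >= snr_selection):.0%} of the realizations")
```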

Even though the suboptimality of selection diversity has been known for 60 years, the antenna selection concept has continued to be analyzed in the MIMO literature and, recently, also in the context of Massive MIMO. Many recent papers consider a generalization of the scheme, known as antenna subset selection, where a subset of the antennas is selected and MRC is then applied using only those antennas.

Why use antenna selection?

A common motivation for using antenna selection is that it would be too expensive to equip every antenna with a dedicated transceiver chain in Massive MIMO, so we would need to sacrifice some of the performance to achieve a feasible implementation. This is a misleading motivation since Massive MIMO capable base stations have already been developed and commercially deployed. I think a better motivation would be that we can save power by only using a subset of the antennas at a time, particularly when the traffic load is below the maximum system capacity, so that we don’t need to compromise on the users’ throughput.

The recent papers [1], [2], [3] on the topic consider narrowband MIMO channels. In contrast, Massive MIMO will in practice be used in wideband systems where the channel fading is different between subcarriers. That means that one antenna will give the highest SINR on one subcarrier, while another antenna will give the highest SINR on another subcarrier. If we apply the antenna selection principle on a per-subcarrier basis in a wideband OFDM system with thousands of subcarriers, we will probably use all the antennas on at least one of the subcarriers. Consequently, we cannot turn off any of the antennas and the power saving benefits are lost.
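The following toy simulation, which assumes a tapped-delay-line Rayleigh fading channel with arbitrarily chosen parameters, illustrates the point: pick the strongest antenna on each subcarrier and count how many different antennas end up being used somewhere in the band. In typical runs, most of the antennas are needed on at least one subcarrier, so few (if any) of them could be switched off.

```python
import numpy as np

rng = np.random.default_rng(2)

M = 64         # base station antennas (illustrative)
K = 2048       # OFDM subcarriers (illustrative)
num_taps = 64  # channel taps, i.e., the degree of frequency selectivity (illustrative)

# Independent Rayleigh-fading taps per antenna -> frequency response on every subcarrier
taps = rng.standard_normal((M, num_taps)) + 1j * rng.standard_normal((M, num_taps))
taps /= np.sqrt(2 * num_taps)
H = np.fft.fft(taps, n=K, axis=1)                 # M x K channel frequency responses

best_antenna = np.argmax(np.abs(H) ** 2, axis=0)  # per-subcarrier antenna selection
used = np.unique(best_antenna).size
print(f"{used} out of {M} antennas are selected on at least one subcarrier")
print(f"Antennas that could be switched off: {M - used}")
```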

We can instead apply the antenna selection scheme based on the average received power over all the subcarriers, but most channel models assume that this average power is the same for every base station antenna (this applies to both i.i.d. fading and correlated fading models, such as the one-ring model). That means that if we want to turn off antennas, we can select them randomly since all random selections will be (almost) equally good, and there are no selection diversity gains to be harvested.

This is why we can forget about antenna selection diversity in Massive MIMO!

It is only when the average channel gain is different among the antennas that antenna subset selection diversity might have a role to play. In that case, the antenna selection is governed by variations in the large-scale fading instead of variations in the small-scale fading, as conventionally assumed. This paper takes a step in that direction. I think this is the only case of antenna (subset) selection that might deserve further attention, while in general, it is a concept that can be forgotten.

Five Promising Research Directions for Antenna Arrays

Ever since I finished writing the book Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency, I have felt that I’m somewhat done with my research on conventional Massive MIMO. The spectral efficiency, energy efficiency, resource allocation, and the pilot contamination phenomenon are well understood by now. This is not a bad thing; as researchers, we are supposed to solve the problems we are analyzing. But it means that this is a good time to look for new research directions, preferably something where we can utilize our skills as Massive MIMO researchers to do something new and exciting!

With this in mind, I gathered a team consisting of myself, Luca Sanguinetti, Henk Wymeersch, Jakob Hoydis, and Thomas L. Marzetta. Each of us has written about one promising new research direction related to antenna arrays and MIMO, including the background of the topic, our long-term vision, and pertinent open problems. This resulted in the paper:

Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays

You can find the preprint on arXiv.org or by clicking on the name of the paper. I hope that you will find it as interesting to read as it was for us to write!

Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

Come listen to Liesbet Van der Perre, Professor at KU Leuven (Belgium), on Monday, February 18, at 2:00 pm EST.

She will give a webinar on state-of-the-art circuit implementations of Massive MIMO and outline future research challenges. The webinar is based on, among other things, this paper.

In more detail, the webinar will summarize the fundamental technical contributions to efficient digital signal processing for Massive MIMO, clarify the opportunities and constraints of operating with low-complexity RF and analog hardware chains, and explain how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes will be discussed, and open challenges and directions for future research will be suggested.

Listen to the webinar by following this link.

Could chip-scale atomic clocks revolutionize wireless access?

This chip-scale atomic clock (CSAC) device, developed by Microsemi, brings atomic-clock timing accuracy (see the specs available in the link) in a volume comparable to a matchbox, with a power consumption of 120 mW. This is way too much for a handheld gadget, but undoubtedly negligible for any fixed installation powered from the grid. It is an alternative to synchronization through GNSS that works anywhere, including indoors in GNSS-denied environments.

I haven’t seen a list price, and I don’t know how much exotic metal and what licensing costs its manufacture requires, but let’s ponder the possibility that a CSAC could be manufactured for the mass market for a few dollars each. What new applications would then become viable in wireless?

The answer is mostly (or entirely) speculation. One potential application that might become more practical is positioning using distributed arrays.  Another is distributed multipair relaying. Here and here are some specific ideas that are communication-theoretically beautiful, and probably powerful, but that seem to be perceived as unrealistic because of synchronization requirements. Perhaps CoMP and distributed MIMO, a.k.a. “cell-free Massive MIMO”, applications could also benefit.

Other applications might be found, for example, in IoT, where a device only sporadically transmits information and wants to stay synchronized (perhaps there is no downlink, hence no way of reliably obtaining synchronization information). If a timing offset (or a frequency offset, for that matter) is unknown but constant over a very long time, it may be treated as a deterministic unknown and estimated. The difficulty with unknown time and frequency offsets is not their existence per se, but the fact that they change quickly over time.
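To illustrate the last point, here is a hedged Python sketch of how a receiver could estimate a frequency offset that is unknown but constant: treat it as a deterministic parameter and locate the peak of a periodogram computed over a long known pilot. All parameter values (sample rate, offset, pilot length, SNR) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 1e6          # sample rate [Hz] (illustrative)
f_offset = 137.0  # true constant frequency offset [Hz], unknown to the receiver
N = 10_000        # pilot length in samples (illustrative)
snr = 10.0        # linear SNR (illustrative)

n = np.arange(N)
pilot = np.ones(N)  # unit-modulus pilot, kept trivial for simplicity
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)
rx = pilot * np.exp(2j * np.pi * f_offset * n / fs) + noise

# Because the offset is constant, it can be estimated as a deterministic unknown,
# here by locating the periodogram peak of the pilot-compensated signal on a fine grid.
pad = 16 * N
freq_grid = np.fft.fftfreq(pad, d=1 / fs)
spectrum = np.abs(np.fft.fft(rx * pilot.conj(), n=pad))
f_hat = freq_grid[np.argmax(spectrum)]

print(f"True offset: {f_offset:.1f} Hz, estimated offset: {f_hat:.1f} Hz")
```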

It’s often said (and true) that the “low” speed of light is the main limiting factor in wireless. (This is because acquiring channel state information is the main limiting factor in wireless communications: if light were faster, then the channel coherence would be longer, so acquiring channel state information would be easier.) But maybe the unavailability of a ubiquitous, reliable time reference is another, almost as important, limiting factor. Can CSAC technology change that? I don’t know, but perhaps we ought to take a closer look.

Beamforming From Distributed Arrays

When an antenna array is used to focus a transmitted signal on a receiver, we call this beamforming (or precoding) and we usually illustrate it as shown to the right. This cartoonish illustration is only applicable when the antennas are gathered in a compact array and there is a line-of-sight channel to the receiver.

If we want to deploy very many antennas, as in Massive MIMO, it might be preferable to distribute the antennas over a larger area. One such deployment concept is called Cell-free Massive MIMO. The basic idea is to have many distributed antennas that are transmitting phase-coherently to the receiving user. In other words, the antennas’ signal components add constructively at the location of the user, just as when using a compact array for beamforming. It is therefore convenient to call it beamforming in both cases—algorithmically it is the same thing!

The question is: How can we illustrate the beamforming effect when using a distributed array?

The figure below shows how to do it. I consider a toy example with 80 star-marked antennas deployed along the sides of a square and these antennas are transmitting sinusoids with equal power, but different phases. The phases are selected to make the 80 sine-components phase-aligned at one particular point in space (where the receiving user is supposed to be):

Clearly, the “beamforming” from a distributed array does not give rise to a concentrated signal beam, but the signal amplification is confined to a small spatial region (where the color is blue and the values on the vertical axis are close to one). This is where the signal components from all the antennas are coherently combined. There are minor fluctuations in channel gain at other places, but the general trend is that the components are non-coherently combined everywhere except at the receiving user. (Roughly the same will happen in a rich multipath channel, even if a compact array is used for transmission.)

By looking at a two-dimensional version of the figure (see below), we can see that the coherent combination occurs in a circular region that is roughly half a wavelength in diameter. At the carrier frequencies used for cellular networks, this region will only be a few centimeters or millimeters wide. It is almost magical how this distributed array can amplify the signal in such a tiny spatial region! This spatial region is probably what the company Artemis calls a personal cell (pCell) when marketing their distributed MIMO solution.

If you are into the details, you might wonder why I simulated a square region that is only a few wavelengths wide, and why the antenna spacing is only a quarter of a wavelength. These choices were made purely for illustrative purposes. If the physical antenna locations are kept fixed while the wavelength is reduced, the size of the circular region will shrink and the ripples will become more frequent. Hence, we would need to compute the channel gain at many more spatial sample points to produce a smooth plot.
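If you want to experiment yourself before downloading the original code (linked below), here is a rough Python sketch in the same spirit: 80 antennas with quarter-wavelength spacing along the sides of a square, with transmit phases chosen so that all components are aligned at an assumed focus point. Path-loss differences between the antennas are ignored for simplicity, so the resulting gain map is only a qualitative reproduction of the figures above.

```python
import numpy as np

# 80 antennas along the sides of a square, quarter-wavelength spacing (20 per side)
wavelength = 1.0
spacing = wavelength / 4
side = 20 * spacing                        # side length of the square (5 wavelengths)
t = np.arange(20) * spacing + spacing / 2  # antenna positions along one side

antennas = np.concatenate([
    np.column_stack((t, np.zeros(20))),       # bottom side
    np.column_stack((t, np.full(20, side))),  # top side
    np.column_stack((np.zeros(20), t)),       # left side
    np.column_stack((np.full(20, side), t)),  # right side
])                                            # 80 x 2 antenna coordinates

focus = np.array([0.6 * side, 0.35 * side])   # assumed location of the receiving user
k = 2 * np.pi / wavelength

# Evaluate the normalized channel gain on a grid inside the square
x = np.linspace(0, side, 200)
X, Y = np.meshgrid(x, x)
points = np.column_stack((X.ravel(), Y.ravel()))

d_point = np.linalg.norm(points[:, None, :] - antennas[None, :, :], axis=2)  # grid x 80
d_focus = np.linalg.norm(focus - antennas, axis=1)                           # 80 distances

# Phases chosen so that all 80 components add coherently at the focus (path loss ignored)
gain = np.abs(np.exp(1j * k * (d_focus - d_point)).sum(axis=1)) ** 2 / antennas.shape[0] ** 2
gain = gain.reshape(X.shape)

print(f"Largest normalized gain on the grid: {gain.max():.2f} (close to one, at the focus)")
# To visualize, uncomment:
# import matplotlib.pyplot as plt
# plt.imshow(gain, extent=[0, side, 0, side], origin="lower"); plt.colorbar(); plt.show()
```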

Reproduce the results: The code that was used to produce the plots can be downloaded from my GitHub.

Adaptive Beamforming and Antenna Arrays

Adaptive beamforming for wireless communications has a long history, with the modern research dating back to the 70s and 80s. There is even a paper from 1919 that describes directive transatlantic communication practices developed during the First World War. Many of the beamforming methods that are considered today can already be found in the magazine paper Beamforming: A Versatile Approach to Spatial Filtering from 1988. Plenty of further work was carried out in the 90s and 00s, before the Massive MIMO paradigm.

I think it is fair to say that no fundamentally new beamforming methods have been developed in the Massive MIMO literature; rather, we have taken known methods and generalized them to take imperfect channel state information and other practical aspects into account. We have then developed rigorous ways to quantify the rates that these beamforming methods can achieve and studied the asymptotic behavior when having many antennas. Closed-form expressions are available in some special cases, while Monte Carlo simulations can be used to evaluate the rate expressions in other cases.
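As an example of what such a computation looks like, the Python sketch below evaluates the classical “use-and-then-forget” capacity lower bound for maximum ratio combining with MMSE channel estimation by Monte Carlo and compares it with the closed-form expression that exists for i.i.d. Rayleigh fading. The single-user setup, the antenna number, and the SNR values are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

M = 64           # number of base station antennas (illustrative)
p = 1.0          # uplink data SNR, linear scale (illustrative)
pp = 10.0        # uplink pilot SNR, linear scale (illustrative)
trials = 20_000

gamma = pp / (pp + 1)  # variance of each element of the MMSE channel estimate

# Channel realizations and MMSE channel estimates from noisy uplink pilots
h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
n_p = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
h_hat = (np.sqrt(pp) / (pp + 1)) * (np.sqrt(pp) * h + n_p)

# Monte Carlo evaluation of the use-and-then-forget bound with MR combining (v = h_hat)
inner = np.sum(h_hat.conj() * h, axis=1)                  # v^H h
mean_gain = np.mean(inner)                                # E[v^H h]
var_gain = np.mean(np.abs(inner) ** 2) - np.abs(mean_gain) ** 2
noise_gain = np.mean(np.sum(np.abs(h_hat) ** 2, axis=1))  # E[||v||^2]

sinr_mc = p * np.abs(mean_gain) ** 2 / (p * var_gain + noise_gain)
se_mc = np.log2(1 + sinr_mc)
se_closed = np.log2(1 + p * M * gamma / (1 + p))          # closed form for this special case

print(f"Spectral efficiency: {se_mc:.2f} (Monte Carlo) vs {se_closed:.2f} (closed form)")
```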

As beamforming has evolved from an analog phased-array concept, where angular beams are studied, to a digital concept where the beamforming is represented in multi-dimensional vector spaces, it is easy to forget the basic properties of array processing. That is why we dedicated Section 7.4 in Massive MIMO Networks to describing how the physical beam width and spatial resolution depend on the array geometry.
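As a complement to the video, here is a small Python sketch of one of these basic properties: the half-power beamwidth of a uniform linear array shrinks roughly in inverse proportion to the aperture, i.e., to the number of elements for a fixed element spacing. Half-wavelength spacing and broadside steering are assumed.

```python
import numpy as np

def half_power_beamwidth(M, spacing=0.5):
    """Numerically find the -3 dB beamwidth (in degrees) of a uniform linear array
    steered towards broadside, with the element spacing given in wavelengths."""
    angles = np.linspace(-90, 90, 20_001)
    psi = 2 * np.pi * spacing * np.sin(np.radians(angles))            # inter-element phase shift
    m = np.arange(M)
    response = np.abs(np.exp(1j * np.outer(psi, m)).sum(axis=1)) / M  # normalized array factor
    main_lobe = angles[response >= 1 / np.sqrt(2)]                    # -3 dB in power
    return main_lobe.max() - main_lobe.min()

for M in (8, 16, 32, 64):
    print(f"{M:3d}-element ULA: half-power beamwidth ~{half_power_beamwidth(M):.1f} degrees")
```

Doubling the number of elements (and thereby the aperture) roughly halves the beamwidth, which is the geometric fact behind the spatial resolution discussion in the book.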

In particular, I’ve observed a lot of confusion about the dimensionality of MIMO arrays, which is probably rooted in the confusion about the difference between an antenna (which is something connected to an RF chain) and a radiating element. I explained this in detail in a previous blog post and then exemplified it based on a recent press release. I have also recorded the following video to visually explain these basic properties:

A recent white paper from Ericsson also provides a good description of these concepts, particularly focused on how an array with a given geometry can be implemented with different numbers of RF chains (i.e., different numbers of antennas) depending on the deployment scenario. While having as many antennas as radiating elements is preferable from a performance perspective, the Ericsson researchers argue that one can get away with fewer antennas in the vertical direction in deployments where it is anyway very hard to resolve users in the elevation dimension.
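To make the antenna-versus-radiating-element distinction concrete, here is a hypothetical Python example in the spirit of that discussion: a panel with 8 x 8 radiating elements where each column is divided into two fixed 4-element vertical subarrays, so that the 64 elements are driven by only 16 RF chains (i.e., 16 antennas). The element counts, the subarray size, and the downtilt are invented for illustration and are not taken from the white paper.

```python
import numpy as np

cols, rows, sub = 8, 8, 4  # 8 x 8 radiating elements, fixed vertical subarrays of 4 elements
spacing = 0.5              # element spacing in wavelengths
tilt_deg = 6.0             # fixed electrical downtilt of each subarray (arbitrary)

def subarray_pattern(theta_deg):
    """Elevation amplitude pattern of one fixed 4-element vertical subarray."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    m = np.arange(sub)
    steer = np.exp(-2j * np.pi * spacing * m * np.sin(np.radians(tilt_deg)))  # fixed weights
    phases = np.exp(2j * np.pi * spacing * np.outer(np.sin(theta), m))
    return np.abs(phases @ steer) / sub

angles = np.linspace(-90, 90, 10_001)
pattern = subarray_pattern(angles)
hpbw = np.ptp(angles[pattern >= 1 / np.sqrt(2)])  # width of the fixed elevation main lobe

print(f"{cols * rows} radiating elements mapped to {cols * (rows // sub)} antennas (RF chains)")
print(f"Each RF chain has a fixed elevation beam of ~{hpbw:.0f} degrees around the downtilt")
```

With this mapping, the digital beamforming can still steer freely in azimuth across the 8 antennas in each row, while the elevation response of every RF chain is locked to the fixed subarray pattern, which is exactly the trade-off described above.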