Category Archives: Beyond 5G

Reconfigurable Intelligent Surfaces: The Resurrection of Relaying

When I started my research career in 2007, relaying was a very popular topic. It was part of the broader area of cooperative communications, where the communication between the transmitter and the receiver is aided by other nodes located in between. This could be anything from a transparent relay that amplifies and retransmits the signal that reaches it in the analog domain (so-called amplify-and-forward) to a regenerative relay that processes and optimizes the signal in the digital baseband before retransmission.

There are many different relaying protocols, along with a rich information theory that underpins the technology. Relaying is supported by many wireless standards but has not become a major commercial success, possibly because the deployment of “pico-cells” is more attractive to network operators looking for improved local-area coverage.

Is relaying a technology whose time has come?

A resurrection of the relaying topic can be observed in the beyond-5G era. Many researchers are considering a particular kind of relay called a reconfigurable intelligent surface (RIS), intelligent reflecting surface, or software-controlled metasurface. Despite the different names and repeated claims of RIS being something fundamentally new, it is clearly a relaying technology. An RIS is a node that receives the signal from the transmitter and then re-radiates it with controllable time-delays. An RIS consists of many small elements that can be assigned different time-delays and can thereby synthesize the scattering behavior of an arbitrarily-shaped object of the same size. This feature can, for instance, be used to beamform the signal towards the receiver as shown in the figure below.

Using the conventional terminology, an RIS is a full-duplex transparent relay since the signals are processed in the analog domain and the surface can receive and re-transmit waves simultaneously. The protocol resembles classical amplify-and-forward, except that the signals are not amplified. The main idea is instead to use a very large surface area, so that the surface captures an unusually large fraction of the signal power and can use its large aperture to re-radiate narrow beams.

Conventional full-duplex relays suffer from loop-back interference, where the amplified signals leak into the yet-to-be-amplified signals in the relay. This issue is avoided in the RIS technology but is replaced by several other fundamental research challenges. In our new paper “Reconfigurable Intelligent Surfaces: Three Myths and Two Critical Questions”, we point out the two most pressing research questions that must be answered. We also debunk three myths surrounding the RIS, one of which is related to relaying.

I have also recorded a YouTube video explaining the fundamentals:

Many Applications for Correlated Fading Models

The channel fading in traditional frequency bands (below 6 GHz) is often well described by the Rayleigh fading model, at least in non-line-of-sight scenarios. This model says that the channel coefficient between any transmit antenna and receive antenna is complex Gaussian distributed, so that its magnitude is Rayleigh distributed.

If there are multiple antennas at the transmitter and/or receiver, then it is common to assume that different pairs of antennas are subject to independent and identically distributed (i.i.d.) fading. This model is called i.i.d. Rayleigh fading and dominates the academic literature to such an extent that one might get the impression that it is the standard case in practice. However, it is rather the other way around: i.i.d. Rayleigh fading only occurs in very special cases in practice, such as having a uniform linear array with half-wavelength-spaced isotropic antennas that is deployed in a rich scattering environment where the multi-paths are uniformly distributed in all directions. If any of these very specific assumptions is removed, the channel coefficients become mutually correlated. I covered the basics of spatial correlation in a previous blog post.

In reality, the channel fading will always be correlated

Some reasons for this are: 1) planar arrays exhibit correlation along the diagonals, since not all adjacent antennas can be half-wavelength-spaced; 2) the antennas have non-isotropic radiation patterns; 3) there will be more multipath components from some directions than from other directions.
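To make this concrete, here is a minimal sketch of how spatially correlated Rayleigh fading can be generated from a correlation matrix. This is my own illustration, not code from the book; the exponential correlation model and the parameter values are common textbook simplifications, assumed here purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 32     # number of antennas
rho = 0.7  # correlation between adjacent antennas (illustrative value)

# Exponential correlation model: R[i, j] = rho^|i - j|
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

# Correlated Rayleigh fading: h = R^(1/2) w with w ~ CN(0, I),
# using the Cholesky factor L as a square root of R
L = np.linalg.cholesky(R)
num_realizations = 100_000
W = (rng.standard_normal((M, num_realizations))
     + 1j * rng.standard_normal((M, num_realizations))) / np.sqrt(2)
H = L @ W

# Sanity check: the sample covariance of the realizations approaches R,
# while each individual |h_i| remains Rayleigh distributed
R_hat = (H @ H.conj().T).real / num_realizations
print(np.max(np.abs(R_hat - R)))  # small Monte Carlo estimation error
```

The same recipe works for any valid spatial correlation matrix, which is why frameworks built around correlated Rayleigh fading are so broadly applicable.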

With this in mind, I have dedicated a lot of my work to analyzing MIMO communications with correlated Rayleigh fading. In particular, our book “Massive MIMO networks” presents a framework for analyzing multi-user MIMO systems that are subject to correlated fading. When we started writing, I thought spatial correlation was a phenomenon that was important to cover to match reality, but that it would have a limited impact on the end results. I have later understood that spatial correlation is fundamental to understanding how communication systems work. In particular, the modeling of spatial correlation changes the game when it comes to pilot contamination: it is an entirely negative effect under i.i.d. Rayleigh fading models, while a proper analysis based on spatial correlation reveals that one can sometimes benefit from purposely assigning the same pilots to users and then separating them based on their spatial correlation properties.

Future applications for spatial correlation models

The book “Massive MIMO networks” presents a framework for channel estimation and computation of achievable rates with uplink receive combining and downlink precoding for correlated Rayleigh fading channels. Although the title of the book implies that it is about Massive MIMO, the results apply to many beyond-5G research topics. Let me give two concrete examples:

  1. Cell-free Massive MIMO: In this topic, many geographically distributed access points are jointly serving all the users in the network. This is essentially a single-cell Massive MIMO system where the access points can be viewed as the distributed antennas of a single base station. The channel estimation and computation of achievable rates can be carried out as described in “Massive MIMO networks”. The key differences are instead related to which precoding/combining schemes are considered practically implementable and the reuse of pilots within a cell (which is possible thanks to the strong spatial correlation).
  2. Extremely Large Aperture Arrays: There are other names for this category, such as holographic MIMO, large intelligent surfaces, and ultra-massive MIMO. The new terminologies are used to indicate the use of co-located arrays that are much larger (in terms of the number of antennas and the physical size) than what is currently considered by the industry when implementing Massive MIMO in 5G. In this case, the spatial correlation matrices must be computed differently than described in “Massive MIMO networks”, for example, to take near-field effects and shadow fading variations into consideration. However, once the spatial correlation matrices have been computed, the same framework for channel estimation and computation of achievable rates is applicable.

The bottom line is that we can analyze many new exciting beyond-5G technologies by making use of the analytical frameworks developed in the past decade. There is no need to reinvent the wheel; we should reuse as much as possible from previous research and then focus on the novel components. Spatial correlation is something that we know how to deal with, and this must not be forgotten.

Record 5G capacity via software upgrade!

In the news: Nokia delivers record 5G capacity gains through a software upgrade. No surprise! We expected, years ago, that this would happen.

What does this software upgrade consist of? I can only speculate. It is, in all likelihood, more than the usual (and endless) operating system bugfixes we habitually think of as “software upgrades”. Could it even be something that goes to the core of what massive MIMO is? Replacing eigen-beamforming with true reciprocity-based beamforming?! Who knows. Replacing maximum-ratio processing with zero-forcing combining?! Or, even more mind-boggling, implementing more sophisticated processing of the sort that has been filling the academic journals in recent years? We don’t know! But it will certainly be interesting to find out at some point, and it seems safe to assume that this race will continue.

A lot of improvement could be achieved over the baseline canonical massive MIMO processing. One could, for example, exploit fading correlation, develop improved power control algorithms or implement algorithms that learn the propagation environment, autonomously adapt, and predict the channels.  
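To give a feel for how much the choice of processing alone can matter, here is a toy comparison of uplink maximum-ratio and zero-forcing combining. This is purely my own illustration: the i.i.d. Rayleigh channels, the array size, and the SNR are all made-up parameters, not anything from Nokia's upgrade:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, snr = 64, 8, 10.0  # antennas, users, per-user SNR (illustrative)

# i.i.d. Rayleigh uplink channels: column k is the channel of user k
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

V_mr = H                                  # maximum-ratio combining
V_zf = H @ np.linalg.inv(H.conj().T @ H)  # zero-forcing combining

def avg_rate(V):
    """Average uplink rate [bit/s/Hz] over the users, for combiners V."""
    G = V.conj().T @ H                    # K x K effective channel
    signal = snr * np.abs(np.diag(G)) ** 2
    interference = snr * (np.abs(G) ** 2).sum(axis=1) - signal
    noise = np.linalg.norm(V, axis=0) ** 2
    return np.log2(1 + signal / (interference + noise)).mean()

print(avg_rate(V_mr), avg_rate(V_zf))  # ZF wins clearly when M >> K here
```

The point is not the specific numbers but that a pure signal-processing change, deployable in software, moves the rate substantially in a regime like this one.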

It might seem that research has already squeezed every drop out of the physical layer, but I do not think so. Huge gains likely remain to be harvested when resources are tight, especially when we are limited by coherence: higher carrier frequencies mean shorter coherence, and high mobility might mean almost no coherence at all. When the system is starved of coherence, even winning a couple of samples on the pilot channel means a lot. Room for new elegant theory in “closed form”? Good question. It may sound heartbreaking, but maybe we have to give up on that. Room for useful algorithms and innovation? Certainly yes. A lot. The race will continue.
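To see why coherence is such a scarce resource, here is a back-of-the-envelope sketch using standard rules of thumb. The delay-spread values and user speeds are assumptions picked for illustration only:

```python
def coherence_block(carrier_hz, speed_mps, delay_spread_s):
    """Rough number of samples per coherence block, using the rules of
    thumb T_c = c / (2 * f_c * v) for the coherence time and
    B_c = 1 / (2 * delay spread) for the coherence bandwidth."""
    c = 3e8  # speed of light [m/s]
    coherence_time = c / (2 * carrier_hz * speed_mps)
    coherence_bandwidth = 1 / (2 * delay_spread_s)
    return coherence_time * coherence_bandwidth

# A pedestrian at 2 GHz versus a vehicle at 28 GHz (assumed delay spreads)
slow_low = coherence_block(2e9, 1.5, 1e-6)
fast_high = coherence_block(28e9, 30, 1e-7)
print(slow_low, fast_high)  # far fewer samples per block at mmWave mobility
```

With tens of thousands of samples per block, pilot overhead is a rounding error; with a few hundred, every pilot sample saved translates into noticeable throughput.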

Towards 6G: Massive MIMO is a Reality—What is Next?

The good side of the social distancing that is currently taking place is that I have spent more time recording video material than usual. For example, I was supposed to give a tutorial entitled “Signal Processing for MIMO Communications Beyond 5G” at ICASSP 2020 in Barcelona in the beginning of May. This conference has now turned into an online event with free registration. Hence, anyone can attend the tutorial that I am giving together with Jiayi Zhang from Beijing Jiaotong University. We have prerecorded the presentations, which will be broadcast to the conference attendees on May 4, but we will be available for live discussions in between each video segment.

As a teaser for this tutorial, I have uploaded the 30-minute introduction to YouTube:

In this video, I explain what Massive MIMO is, what role it plays in 5G, why there will be a large gap between the theoretical performance and what is achieved in practice, and what might come next. In particular, I explain my philosophy regarding 6G research.

The remaining 2.5 hours of the tutorial will only be available from ICASSP. I hope to meet you online on May 4!

Infinitely Large Arrays?

A defining factor for the early Massive MIMO literature is the study of communication systems where the number of base station antennas M goes to infinity. Marzetta’s original paper considered this asymptotic regime from the outset. Many other papers derived rate expressions for a finite M value, and then studied their behavior as M\to \infty . In these papers, the signal-to-noise ratio (SNR) grows linearly with M, and the rate grows towards infinity as \log_2(M) (except when pilot contamination is a limiting factor).
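Under the far-field model, this logarithmic growth is easy to verify numerically. The single-user sketch below (my own illustration, with an arbitrary per-antenna SNR) shows that the rate \log_2(1 + M \cdot \mathrm{SNR}_0) exceeds \log_2(M) by a gap that converges to a constant:

```python
import numpy as np

snr0 = 0.01  # per-antenna SNR (illustrative; even a weak link benefits)
for M in [10, 100, 1000, 10000, 100000]:
    rate = np.log2(1 + M * snr0)       # far-field model: SNR grows linearly in M
    print(M, rate, rate - np.log2(M))  # the gap tends to log2(snr0)
```

Each tenfold increase in M adds the same log2(10) ≈ 3.3 bit/s/Hz, which is exactly the unbounded growth that the next paragraph questions.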

I have carried out such asymptotic analysis myself, but there is one issue that has been disturbing me for a while: The SNR cannot grow without bound in practice because we can never receive more power than what was transmitted from the base station. It doesn’t matter how many transmit antennas are used or how razor-sharp the beams become; the law of conservation of energy must be respected. So where is the error in the analysis?

The problem is not that M\to \infty implies the use of infinitely large arrays. If we accept that the universe is infinite, it is plausible to create an M-antenna planar array for any value of M, for example, using a \sqrt{M} \times \sqrt{M} grid. Such an array is illustrated in Figure 1.

Figure 1: A planar array with 9 x 9 antennas is used to communicate with a user device.

The actual issue is how the channel gains (pathlosses) between the antennas and the user are modeled. We are used to considering channel models based on far-field approximations, where the channel gain is the same for all antennas (when averaging over small-scale fading). However, as the size of the array grows, the approximate far-field models become inaccurate. Instead, one needs to model the following phenomena that appear in the near-field:

  1. The propagation distance is different between each base station antenna and the user.
  2. The effective antenna areas vary in the array, even if the physical areas are equal, since the antennas are observed from different angles.
  3. The polarization losses vary between the antennas, because of the different angles from the antennas to the user.

It might seem complicated to take these phenomena into account, but the following paper shows how to compute the channel gain exactly when the user is centered in front of the array. I generalized this formula to non-centered users in a new paper. We utilized the new result to study the asymptotic behaviors of Massive MIMO and also intelligent reflecting surfaces. It turned out that all three aforementioned phenomena are important to get accurate asymptotic results. When transmitting from an isotropic antenna to a planar Massive MIMO array, the total channel gain converges to 1/3 instead of going to infinity. The remaining 2/3 of the transmit power is lost due to polarization mismatch or radiation into the opposite direction of the array. If either of the first two phenomena is ignored, the channel gain will grow unboundedly as M\to \infty , which is physically impossible. If the last one is ignored, the asymptotic channel gain is overestimated by 50%, so this is the least critical factor.
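The 1/3 limit can be reproduced numerically. The sketch below is my own illustration (not code from the papers): it integrates a channel-gain density over a square array in front of a centered user, where the assumed density form accounts for the three phenomena above (distance, effective area, polarization) and its integral over an infinite plane is exactly 1/3, consistent with the result just stated:

```python
import numpy as np

def total_channel_gain(side, d, n=1501):
    """Numerically integrate a near-field channel-gain density over a
    square array of the given side length, for a user at distance d in
    front of the array center (all lengths in the same units). The density
        d * (y^2 + d^2) / (4*pi*(x^2 + y^2 + d^2)^(5/2))
    combines spherical spreading, the projected antenna area, and a
    polarization factor; a tangent substitution x = d*tan(u) concentrates
    the grid points near the center, and trapezoidal weights are used."""
    lim = np.arctan(side / (2 * d))
    u = np.linspace(-lim, lim, n)
    du = u[1] - u[0]
    w = np.ones(n)
    w[0] = w[-1] = 0.5                       # trapezoidal end weights
    U, V = np.meshgrid(u, u)
    x, y = d * np.tan(U), d * np.tan(V)
    jac = (d / np.cos(U) ** 2) * (d / np.cos(V) ** 2)
    density = d * (y**2 + d**2) / (4 * np.pi * (x**2 + y**2 + d**2) ** 2.5)
    return np.sum(density * jac * np.outer(w, w)) * du * du

for side in [1, 10, 100, 1000]:
    exact = total_channel_gain(side, d=1.0)
    far_field = side**2 / (4 * np.pi)  # far-field approximation: area/(4*pi*d^2)
    print(side, exact, far_field)      # exact saturates near 1/3; approx diverges
```

The far-field column keeps growing with the array area, while the exact column flattens out, which is precisely the disagreement between the asymptotic literature and the conservation of energy.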

Although the exact channel model can be used to accurately predict the asymptotic SNR behavior, my experience from studying this topic is that the far-field approximations are accurate in most cases of practical interest. Only when the propagation distance is shorter than the side length of the array do the near-field phenomena become critically important. In other words, the scaling laws that have been obtained in the Massive MIMO literature are usually applicable in practice, even if they break down asymptotically.

Two Roles of Deep Learning in Massive MIMO

The hype around machine learning, particularly deep learning, has spread over the world. It is not only affecting engineers but also philosophers and government agencies, which try to predict what implications machine learning will have on our society.

Once the initial excitement has subsided, I think machine learning will be an important tool that many engineers will find useful, alongside more classical tools such as optimization theory and Fourier analysis. I have spent the last two years thinking about what role deep learning can have in the field of communications. This field is rather different from other areas where deep learning has been successful: we deal with man-made systems that have been designed, based on rigorous theory, to operate close to the fundamental performance limits, for example, the Shannon capacity. Hence, at first sight, there seems to be little room for improvement.

I have nevertheless identified two main applications of supervised deep learning in the physical layer of communication systems: 1) algorithmic approximation and 2) function inversion.

You can read about them in my recent survey paper “Two Applications of Deep Learning in the Physical Layer of Communication Systems” or watch the following video:

In the video, I’m exemplifying the applications through two recent papers where we applied deep learning to improve Massive MIMO systems. Here are links to those papers:

Trinh Van Chien, Emil Björnson, Erik G. Larsson, “Sum Spectral Efficiency Maximization in Massive MIMO Systems: Benefits from Deep Learning,” IEEE International Conference on Communications (ICC), 2019.

Özlem Tugfe Demir, Emil Björnson, “Channel Estimation in Massive MIMO under Hardware Non-Linearities: Bayesian Methods versus Deep Learning,” IEEE Open Journal of the Communications Society, 2020.

Intelligent Reflecting Surfaces: On Use Cases and Path Loss Model

Emerging intelligent reflecting surface (IRS) technology, also known under the names “reconfigurable intelligent surface” and “software-controlled metasurface”, is sometimes marketed as an enabling technology for 6G. How does it work, what are its use cases, and how much will it improve wireless access performance at large?

The physical principle of an IRS is that the surface is composed of N atoms, each of which acts as an “intelligent” scatterer: a small antenna that receives and re-radiates without amplification, but with a controllable phase-shift. Typically, an atom is implemented as a small patch antenna terminated with an adjustable impedance. Assuming the phase shifts are properly adjusted, the N scattered wavefronts can be made to add up constructively at the receiver. If coupling between the atoms is neglected, the analysis of an IRS essentially entails (i) finding the Green’s function of the source (a sum of spherical waves if close, or a plane wave if far away), (ii) computing the impinging field at each atom, (iii) integrating this field over the surface of each atom to find a current density, (iv) computing the radiated field from each atom using the physical-optics approximation, and (v) applying the superposition principle to find the field at the receiver. If the atoms are electrically small, one can approximate the re-radiated field by pretending that the atoms are point sources; the received “signal” is then basically a superposition of phase-shifted (as  e^{jkr}) and amplitude-scaled (as 1/r) source signals.
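The point-source picture in step (v) is easy to simulate. The sketch below is an illustration with made-up geometry (all distances in wavelength units): each atom contributes a term with amplitude 1/(r_1 r_2) and phase -k(r_1 + r_2) plus its own controllable shift, and co-phasing the atoms makes the spherical waves add constructively at the receiver:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi  # wavenumber for a unit wavelength (illustrative units)

# Hypothetical geometry: atoms on a half-wavelength grid in the z = 0 plane,
# transmitter and receiver at arbitrary points in front of the surface
N_side = 16
grid = (np.arange(N_side) - (N_side - 1) / 2) * 0.5
X, Y = np.meshgrid(grid, grid)
atoms = np.stack([X.ravel(), Y.ravel(), np.zeros(N_side**2)], axis=1)
tx = np.array([0.0, 0.0, 50.0])
rx = np.array([20.0, 0.0, 30.0])

r1 = np.linalg.norm(atoms - tx, axis=1)  # transmitter-to-atom distances
r2 = np.linalg.norm(atoms - rx, axis=1)  # atom-to-receiver distances

# Each atom's contribution: amplitude ~ 1/(r1*r2), phase -k(r1+r2) + phi_n
amp = 1.0 / (r1 * r2)
phase = -k * (r1 + r2)

phi_opt = -phase  # co-phasing: every term arrives with zero phase
phi_rand = rng.uniform(0, 2 * np.pi, size=phase.size)

field_opt = np.abs(np.sum(amp * np.exp(1j * (phase + phi_opt))))
field_rand = np.abs(np.sum(amp * np.exp(1j * (phase + phi_rand))))
print(field_opt, field_rand)  # the co-phased sum is much stronger
```

With optimized phase shifts the magnitudes add directly, whereas random phase shifts give only the incoherent (roughly square-root) accumulation, which is the entire value proposition of making the surface reconfigurable.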

A point worth re-iterating is that an atom is a scatterer, not a “mirror”. A more subtle point is that the entire IRS as such, consisting of a collection of scatterers, is itself also a scatterer, not a mirror. “Mirrors” exist only in textbooks, when a plane wave is impinging onto an infinitely large conducting plate (none of which exist in practice). Irrespective of how the IRS is constructed, if it is viewed from far enough away, its radiated field will have a beamwidth that is inversely proportional to its size measured in wavelengths.
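This inverse relation between size and beamwidth is easy to check numerically. The sketch below is my own illustration for an idealized one-dimensional uniform aperture (ignoring element patterns and edge effects); doubling the size in wavelengths halves the half-power beamwidth:

```python
import numpy as np

def half_power_beamwidth(length_in_wavelengths, n=2_000_001):
    """Half-power (-3 dB) beamwidth, in radians, of the broadside beam of
    a uniform continuous aperture. Its far-field power pattern is
    |sinc(L * sin(theta))|^2 with L in wavelengths, where
    np.sinc(x) = sin(pi*x)/(pi*x)."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)
    pattern = np.sinc(length_in_wavelengths * np.sin(theta)) ** 2
    mainlobe = theta[pattern >= 0.5]  # sidelobes stay below half power
    return mainlobe[-1] - mainlobe[0]

for L in [5, 10, 20, 40]:
    print(L, half_power_beamwidth(L))  # roughly 0.886/L radians each time
```

The same scaling holds for an IRS viewed from the far-field: only its size measured in wavelengths sets how narrow the re-radiated beam can be.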

Two different operating regimes of IRSs can be distinguished:

1. Both the transmitter and the receiver are in the far-field of the surface. Then the waves seen at the surface can be approximated as planar; the phase differential from the surface center to its edge is less than a few degrees, say. In this case, the phase shifts applied to each atom should be linear in the surface coordinate. The foreseen use case would be to improve coverage, or to provide an extra path that improves the rank of a point-to-point MIMO channel. Unfortunately, in this case the transmitter-IRS-receiver path loss scales very unfavorably: the end-to-end channel gain is proportional to (N/(r_1r_2))^2, where N is the number of meta-atoms in the surface, so the loss grows with the product of the two distances. The reason is, again, that the IRS itself acts as a (large) scatterer, not a “mirror”. Therefore, the IRS has to be quite large before it becomes competitive with a standard single-antenna decode-and-forward relay, a simple, well-understood technology that can be implemented using already widely available components, at small power consumption and with a small form factor. (In addition, full-duplex technology is emerging and may eventually be combined with relaying, or even massive MIMO relaying.)

2. At least one of the transmitter and the receiver is in the surface near-field. Here the plane-wave approximation is no longer valid. The IRS can then either be sub-optimally configured to act as a “mirror”, in which case the phase shifts vary linearly as a function of the surface coordinate, or it can be configured to act as a “lens”, with optimized phase-shifts, which is typically better. As shown for example in this paper, in the near-field case the path loss scales more favorably than in the far-field case. The use cases for the near-field case are less obvious, but one can perhaps think of indoor environments with users close to the walls and every wall covered by an IRS. Another potential use case that I learned about recently is to use the IRS as a MIMO transmitter: a single-antenna transmitter and a nearby IRS can be jointly configured to act as a MIMO beamforming array.

So how useful will IRS technology be in 6G? The question seems open. Indoor coverage in niche scenarios, but isn’t this an already solved problem? Outdoor coverage improvement, but then (cell-free) massive MIMO seems to be a much better option? Building MIMO transmitters from a single antenna seems exciting, but is it better than using conventional RF? Perhaps it is for the Terahertz bands, where the implementation of coherent MIMO may prove truly challenging, that IRS technology will be most beneficial.

A final point is that nothing requires the atoms in an IRS to be located adjacently to one another, or even to form a surface! But they are probably easier to coordinate if they are in more or less the same place.