Making Cell-Free Massive MIMO Competitive

The paper “Making Cell-Free Massive MIMO Competitive with MMSE Processing and Centralized Implementation” that I’ve authored together with Luca Sanguinetti has been awarded the 2022 IEEE Marconi Prize Paper Award in Wireless Communications. This is a great honor that places the paper on the same list as many seminal papers published in the IEEE Transactions on Wireless Communications.

I will take this opportunity to elaborate on five things that I learned while writing the paper. The basic premise is that we analyzed the uplink of a system with many distributed access points (APs) that serve a collection of user devices at the same time and frequency. We compared the data rates that can be achieved depending on how deeply the APs cooperate, from Level 1 (a cellular network with no cooperation) to Level 4 (a cell-free network with centralized computations based on complete signal knowledge). We also compared maximum ratio (MR) processing of the received signals with local and centralized forms of minimum mean-squared error (MMSE) processing.

I learned the following five things:

  1. MMSE processing always outperforms MR processing. This might seem obvious, since the former scheme can suppress interference, but the truly surprising finding was that the performance gap is large even for single-antenna APs that operate in a distributed fashion. The reason is that MMSE processing provides much stronger channel hardening.
  2. Distributed MR processing is the worst-case scenario. Many of the early works on cell-free massive MIMO assumed distributed MR processing and focused on developing advanced power control algorithms. We demonstrated that one can achieve better performance with MMSE processing and rudimentary power control; thus, when designing a cell-free system, the choice of processing scheme is of primary importance, while the choice of power control is secondary.
  3. Linear uplink processing is nearly optimal. In a fully centralized implementation, it is possible to implement non-linear processing schemes for signal detection; in particular, successive interference cancellation could be used. We showed that this approach only increases the sum rate by a few percent, which isn’t enough to motivate the increased complexity. The reason is that we seldom have any strong interfering signals, just many weakly interfering signals.
  4. Distributed processing increases fronthaul signaling. Since the received signals are distributed over the APs, it might seem logical that one can reduce the fronthaul signaling by also doing parts of the processing distributively. This is not the case in the intended operating regime of cell-free massive MIMO, where each AP serves at least as many users as it has antennas. In this regime, fewer parameters need to be sent over the fronthaul with a centralized implementation!
  5. Max-min fairness is a terrible performance goal. While a key motivation behind cell-free massive MIMO is to even out the performance variations in the system, compared to cellular networks, we shouldn’t strive for exact uniformity. To put it differently, the user with the worst channel conditions in the country shouldn’t dictate the performance of everyone else! Several early works on the topic focused on max-min fairness optimization and showed very promising simulation results, but when I attempted to reproduce these results, I noticed that they were obtained by terminating the optimization algorithms long before the max-min fairness solution was found. This indicates that we need a performance goal based on relative fairness (proportional fairness?) instead of the overly strict max-min fairness goal.
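The gap in the first lesson is easy to reproduce in a toy simulation. The following NumPy sketch (my own illustrative setup, not the paper's four-level cell-free model: one centralized M-antenna receiver, perfect channel knowledge, arbitrary parameters) computes the uplink SINR of one user under MR and MMSE combining:

```python
import numpy as np

# Toy uplink: one centralized M-antenna receiver, K single-antenna users,
# perfect channel knowledge. (Illustrative parameters, not from the paper.)
rng = np.random.default_rng(1)
M, K = 8, 4            # receive antennas, users
p, sigma2 = 1.0, 0.1   # per-user transmit power, noise variance

# i.i.d. Rayleigh fading channels, one column per user
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

def sinr(v, k):
    """Uplink SINR of user k with receive combining vector v."""
    gains = p * np.abs(v.conj() @ H) ** 2
    return gains[k] / (gains.sum() - gains[k] + sigma2 * np.linalg.norm(v) ** 2)

k = 0
v_mr = H[:, k]                       # MR: match the user's own channel
v_mmse = np.linalg.solve(            # MMSE: also suppress the other users
    p * (H @ H.conj().T) + sigma2 * np.eye(M), H[:, k]
)

print(f"MR SINR:   {sinr(v_mr, k):.2f}")
print(f"MMSE SINR: {sinr(v_mmse, k):.2f}")
```

Since MMSE combining is the SINR-maximizing receive filter, its SINR is never below the MR value, and the gap grows with the interference load.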

Since the paper was written in 2019, I have treated centralized MMSE processing as the gold standard for cell-free massive MIMO. I have continued looking for ways to reduce the fronthaul signaling while making use of distributed computational resources (which will likely be available in practice). I will mention two recent papers in this direction. The first is “MMSE-Optimal Sequential Processing for Cell-Free Massive MIMO With Radio Stripes”, which shows that one can implement centralized MMSE processing in a distributed/sequential manner, if the fronthaul is sequential. The paper “Team MMSE Precoding with Applications to Cell-free Massive MIMO” develops a methodology for dealing with the corresponding downlink problem, which is more challenging due to power and causality constraints.
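The sequential idea can be sketched in a heavily simplified single-user setting (my own toy example, not the multi-user algorithm from the radio-stripes paper): each AP on the stripe forwards only a low-dimensional sufficient statistic to the next one, yet the final estimate coincides with fully centralized MMSE processing of all received signals.

```python
import numpy as np

# Toy single-user uplink over a radio stripe: L APs with M antennas each.
# Each AP forwards only two scalars along the stripe, yet the end result
# equals the fully centralized MMSE estimate. (Single-user simplification;
# all parameters are illustrative.)
rng = np.random.default_rng(0)
L, M = 4, 2            # APs on the stripe, antennas per AP
p, sigma2 = 1.0, 0.1   # signal power, noise variance

h = [(rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2) for _ in range(L)]
s = np.sqrt(p / 2) * (rng.normal() + 1j * rng.normal())          # transmitted symbol
y = [hl * s + np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
     for hl in h]                                                # per-AP received signals

# Sequential pass: accumulate the sufficient statistic z and the precision q
z, q = 0.0 + 0.0j, 1.0 / p
for hl, yl in zip(h, y):
    z += hl.conj() @ yl / sigma2
    q += np.sum(np.abs(hl) ** 2) / sigma2
s_seq = z / q

# Centralized MMSE on the stacked signals, for comparison
h_all, y_all = np.concatenate(h), np.concatenate(y)
s_cen = p * h_all.conj() @ np.linalg.solve(
    p * np.outer(h_all, h_all.conj()) + sigma2 * np.eye(L * M), y_all
)

print(np.isclose(s_seq, s_cen))  # True: the two estimates coincide
```

The equivalence follows from the matrix inversion lemma; the multi-user case in the paper is more involved because the forwarded statistics become vectors and matrices, but the principle is the same.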

Finally, let me thank IEEE ComSoc for not only giving us the Marconi Prize Paper Award but also producing the following nice video about the paper:

Episode 28: Ultra-Reliable Low-Latency Communication (With Petar Popovski)

We have now released the 28th episode of the podcast Wireless Future. It has the following abstract:

The reliability of an application is determined by its weakest link, which often is the wireless link. Channel coding and retransmissions are traditionally used to enhance reliability but at the cost of extra latency. 5G promises to enhance both reliability and latency in a new operational mode called ultra-reliable low-latency communication (URLLC). In this episode, Erik G. Larsson and Emil Björnson discuss URLLC with Petar Popovski, Professor at Aalborg University, Denmark. The conversation pinpoints the physical reasons for latency and unreliability, and viable solutions related to network deployment, diversity, digital vs. analog communications, non-orthogonal network slicing, and machine learning. Further details can be found in the article “Wireless Access in Ultra-Reliable Low-Latency Communication (URLLC)” and its companion video.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 27: Open Air Interface (With Florian Kaltenberger)

We have now released the 27th episode of the podcast Wireless Future. It has the following abstract:

Mobile network technology builds on open standards, but it is nevertheless a major effort to implement the required software protocols and interface them with actual hardware. Many algorithmic choices must also be made in the implementation, leading to each vendor having its proprietary solution. The OpenAirInterface Alliance wants to change the game by providing open-source software implementations of the wireless air interface and core network. In this episode, Emil Björnson and Erik G. Larsson are discussing these prospects with a Board Member of the Alliance: Florian Kaltenberger, Associate Professor at EURECOM, France. The conversation covers the fundamentals of air interfaces, how anyone can build a 5G network using their open-source software and off-the-shelf hardware, and the pros and cons of implementing everything in software. The connections to Open RAN, functional splits, and patent licenses are also discussed. Further details can be found at https://openairinterface.org and in the paper “OpenAirInterface: Democratizing innovation in the 5G Era”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 26: Network Slicing

We have now released the 26th episode of the podcast Wireless Future. It has the following abstract:

In the near future, we will be able to deploy new wireless networks without installing new physical infrastructure. The networks will instead be virtualized on shared hardware using the new concept of network slicing. This will enable tailored wireless services for businesses, entertainment, and devices with special demands. In this episode, Erik G. Larsson and Emil Björnson discuss why we need multiple virtual networks, what the practical services might be, who will pay for them, and whether the concept might break net neutrality. The episode starts with a continued discussion on the usefulness of models, based on feedback from listeners regarding Episode 25. The network slicing topic starts after 10 minutes.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 25: What Models are Useful?

We have now released the 25th episode of the podcast Wireless Future. It has the following abstract:

The statistician George Box famously said that “All models are wrong, but some are useful”. In this episode, Emil Björnson and Erik G. Larsson discuss what models are useful in the context of wireless communications, and for what purposes. The conversation covers modeling of wireless propagation, noise, hardware, and wireless traffic. A key message is that the modeling requirements are different for algorithmic development and for performance evaluation.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 24: Q&A With 5G and 6G Predictions

We have now released the 24th episode of the podcast Wireless Future, which is a New Year’s special! It has the following abstract:

In this episode, Emil Björnson and Erik G. Larsson answer ten questions from the listeners. The common theme is predictions of how 5G will evolve and which technologies will be important in 6G. The specific questions: Will Moore’s law or Edholm’s law break down first? How important will integrated communication and sensing become? When will private 5G networks start to appear? Will reconfigurable intelligent surfaces be a key enabler of 6G? How can we manage the computational complexity in large-aperture Massive MIMO? Will machine learning be the game-changer in 6G? What is 5G Dynamic Spectrum Sharing? What does the convergence of the Shannon and Maxwell theories imply? What happened to device-to-device communications, is it an upcoming 5G feature? Will full-duplex radios be adopted in the future? If you have a question or idea for a future topic, please share it as a comment to the YouTube version of this episode.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 23: Wireless Localization and Sensing (With Henk Wymeersch)

We have now released the 23rd episode of the podcast Wireless Future! It has the following abstract:

For each wireless generation, we are using more bandwidth and more antennas. While the primary reason is to increase the communication capacity, it also increases the network’s ability to localize objects and sense changes in the wireless environment. The localization and sensing applications impose entirely different requirements on the desired signal and channel properties than communications. To learn more about this, Emil Björnson and Erik G. Larsson have invited Henk Wymeersch, Professor at Chalmers University of Technology, Sweden. The conversation covers the fundamentals of wireless localization, the historical evolution, and future developments that might involve machine learning, terahertz bands, and reconfigurable intelligent surfaces. Further details can be found in the articles “Collaborative sensor network localization” and “Integration of communication and sensing in 6G”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:
