
Episode 32: Information-Theoretic Foundations of 6G (With Giuseppe Caire)

The Wireless Future podcast is back with a new season. We have released the 32nd episode, which has the following abstract:

Information theory is the research discipline that establishes the fundamental limits for information transfer, storage, and processing. Major advances in wireless communications have often been a combination of information-theoretic predictions and engineering efforts that turn them into mainstream technology. Erik G. Larsson and Emil Björnson invited the information theorist Giuseppe Caire, Professor at TU Berlin, to discuss how the discipline is shaping current and future wireless networks. The conversation first covers the journey from classical multiuser information theory to Massive MIMO technology in 5G. The rest of the episode goes through potential future developments that can be assessed through information theory: distributed MIMO, orthogonal time-frequency-space (OTFS) modulation, coded caching, reconfigurable intelligent surfaces, terahertz bands, and the use of ever larger numbers of antennas. The following papers are mentioned: “OTFS vs. OFDM in the Presence of Sparsity: A Fair Comparison”, “Joint Spatial Division and Multiplexing”, and “Massive MIMO has Unlimited Capacity”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Spatial Multiplexing in the Depth Domain

We humans can estimate the distance to objects using our eyesight. This perception of depth is enabled by the fact that our two eyes are separated and therefore get different perspectives of the world. It is easier to determine the distance from our eyes to nearby objects than to things further away, because the two visual inputs are then more different. The ability to estimate distances also varies between people, depending on the quality of their eyesight and on how well their brains process the visual inputs.

An antenna array for wireless communications also has depth perception; for example, we showed in a recent paper that if the receiver is closer to the transmitter than the Fraunhofer distance 2D²/λ divided by 10, then the received signal is significantly different from one transmitted from further away. Hence, one can take D²/(5λ) as the maximum distance of the depth perception, where D is the aperture length of the array and λ is the wavelength. The derivation of this formula is based on evaluating when the spherical curvatures of waves arriving from different distances are significantly different.

Our eyes are separated by roughly D=6 cm and the wavelength of visual light is roughly λ=600 nm. The formula above then says that the depth perception reaches up to 1200 m. In contrast, a typical 5G base station has an aperture length of D=1 m and operates at 3 GHz (λ=10 cm), which limits the depth perception to 2 m. This is why we seldom mention depth perception in wireless communications; the classical plane-wave approximation can be used when the depth variations are indistinguishable. However, the research community is now simultaneously considering the use of physically larger antenna arrays (larger D) and the utilization of higher frequency bands (smaller λ). For an array with length D=10 m and mmWave communications at λ=10 mm, the depth perception reaches up to 2000 m. I therefore believe that depth perception will become a standard feature in 6G and beyond.
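
The arithmetic behind these three examples is straightforward. Here is a minimal sketch (my own illustration, not code from the paper) that evaluates the formula:

```python
def depth_perception_limit(D, wavelength):
    """Maximum distance [m] at which the spherical wavefront curvature is
    distinguishable: the Fraunhofer distance 2*D^2/wavelength divided by 10."""
    return D**2 / (5 * wavelength)

print(depth_perception_limit(D=0.06, wavelength=600e-9))  # human eyes: 1200 m
print(depth_perception_limit(D=1.0, wavelength=0.10))     # 5G array, 3 GHz: 2 m
print(depth_perception_limit(D=10.0, wavelength=0.01))    # 10 m array, mmWave: 2000 m
```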

There is a series of research papers that analyze this feature, often implicitly when mentioning terms such as the radiative near-field, finite-depth beamforming, extremely large aperture arrays, and holographic MIMO. If you are interested in learning more about this topic, I recommend our new book chapter “Near-Field Beamforming and Multiplexing Using Extremely Large Aperture Arrays”, authored by Parisa Ramezani and myself. We summarize the theory for how an antenna array can transmit a “beam” towards a nearby focal point so that the focusing effect vanishes both before and after that point. This feature is illustrated in the following figure:

This is not merely a physical curiosity: it enables a large antenna array to communicate simultaneously with user devices located in the same direction but at different distances. The users just need to be at sufficiently different distances so that the focusing effect illustrated above can be applied to each user, resulting in roughly non-overlapping focal regions. We call this near-field multiplexing in the depth domain.
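
The focusing effect can be checked numerically. The following sketch is my own toy setup (not code from the book chapter): a half-wavelength-spaced array with a 10 m aperture is focused on a point at 50 m by matched filtering against the spherical wavefront, and the resulting beamforming gain is then evaluated at other distances in the same direction:

```python
import numpy as np

wavelength = 0.01                              # 10 mm carrier (assumed)
N = 2000                                       # antennas, aperture D ~ 10 m
x = (np.arange(N) - (N - 1) / 2) * wavelength / 2   # half-wavelength spacing

def array_response(z):
    """Spherical-wave array response of a point source at broadside distance z."""
    r = np.sqrt(z**2 + x**2)                   # exact distance to each antenna
    return np.exp(-2j * np.pi * r / wavelength)

focal = 50.0                                   # focus the "beam" at 50 m
w = array_response(focal).conj() / np.sqrt(N)  # matched-filter focusing vector

for z in [10, 25, 50, 100, 500, 2000]:
    gain = np.abs(w @ array_response(z))**2 / N   # normalized so that 1 = maximum
    print(f"distance {z:5d} m: relative gain {gain:.3f}")
```

The gain peaks at 50 m and collapses both before and after, so two users in the same direction but at sufficiently different depths see nearly orthogonal focusing vectors, which is what makes the multiplexing possible.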

A less technical overview of this emerging topic can also be found in this video:

I would like to thank my collaborators Luca Sanguinetti, Özlem Tuğfe Demir, Andrea de Jesus Torres, and Parisa Ramezani for their contributions.

Episode 31: Analog Modulation and Over-the-Air Aggregation

We have now released the 31st episode of the podcast Wireless Future. It has the following abstract:

A wave of digitalization is sweeping over the world, but not everything benefits from a transformation from analog to digital methods. In this episode, Emil Björnson and Erik G. Larsson discuss the fundamentals of analog modulation techniques to pinpoint their key advantages. Particular attention is given to how analog modulation enables over-the-air aggregation of data, which can be used for computations, efficient federated training of machine learning models, and distributed hypothesis testing. The conversation covers the need for coherent operation and power control and outlines the challenges that researchers are now facing when extending the methods to multi-antenna systems. Towards the end, the following paper is mentioned: “Optimal MIMO Combining for Blind Federated Edge Learning with Gradient Sparsification”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 30: The Sionna Library for Link-Level Simulations (With Jakob Hoydis)

We have now released the 30th episode of the podcast Wireless Future. It has the following abstract:

Many assumptions must be made when simulating a communication link, including the modulation format, channel coding, multi-antenna transmission scheme, receiver processing, and channel modeling. In this episode, Emil Björnson and Erik G. Larsson are visited by Jakob Hoydis, Principal Research Scientist at NVIDIA, to discuss the fundamentals of link-level simulations. Jakob has led the development of the new open-source simulator Sionna, which is particularly well suited for machine learning research. The conversation covers the needs and means for making accurate simulations, channel modeling, reproducibility, and how machine learning can be used to improve standard algorithms. Other topics that are discussed are MIMO decoding and technical debt. Sionna can be downloaded from https://nvlabs.github.io/sionna/ and the white paper that is mentioned in the episode is found here.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 29: Six 6G Technologies: The Cases For and Against

We have now released the 29th episode of the podcast Wireless Future. It has the following abstract:

The research towards 6G is intense and many new technology components are being proposed by academia and industry. In this episode, Erik G. Larsson and Emil Björnson identify the key selling points of six of these 6G technologies. They discuss the potential for major breakthroughs and what the main challenges are. The episode covers: 1) Semantic communications; 2) Distributed/cell-free Massive MIMO; 3) Reconfigurable intelligent surfaces; 4) Full-duplex radios; 5) Joint communication and sensing; and 6) Orbital Angular Momentum (OAM). The following paper is mentioned: “Is Orbital Angular Momentum (OAM) Based Radio Communication an Unexploited Area?” by Edfors and Johansson. 

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Cell-Free Massive MIMO and Cloud-RAN

One question that I often receive from reviewers is: What is the difference between Cell-free Massive MIMO and the C-RAN technology? In this blog post, I will provide my usual answer to this question and then elaborate on why reviewers are not always satisfied with this answer.

In a nutshell, a base station consists of an antenna, a radio unit, and a baseband unit (or multiple antennas and radios, in the case of MIMO). These are traditionally co-located as follows: the antenna is deployed in a tower, the baseband unit is placed underneath, and the radio is located at one of these places.

Cloud Radio Access Network (C-RAN)

C-RAN is an alternative deployment architecture where the baseband units are deployed at other locations, “in the cloud” so to speak. The C in C-RAN can also stand for “centralized”, which might be preferable terminology since the word “cloud” is often associated with the use of general-purpose computing hardware. In C-RAN, the baseband processing tasks of many neighboring base stations are carried out in a single central processing unit (CPU), which might use specialized or general-purpose hardware. For latency reasons, the physical distance between a base station site and its CPU shouldn’t be more than a few kilometers. By pooling the processing resources in this way, it is possible to reduce the total hardware expenditure, particularly since some base stations often have a low traffic load, so the total hardware capability can be reduced through sharing. In other words, one can build a network that can handle high traffic anywhere, as long as it doesn’t happen everywhere at the same time. Another important benefit is that neighboring base stations can more easily cooperate since their processing is carried out at the same CPU anyway.

In summary, C-RAN is a deployment architecture. Different air interfaces (e.g., 4G, 5G) can be implemented on top of the C-RAN architecture, making use of different physical layer techniques (e.g., Massive MIMO, coordinated multipoint).

Cell-free Massive MIMO

This is a new physical-layer technology where neighboring base stations (called access points in the related literature) jointly serve the users in their vicinity. By carrying out coherent signal processing, the signal power can be boosted, the interference can be mitigated, and the cell boundaries effectively disappear. We are essentially creating a wide-area network that is free from cells. There are different forms of Cell-Free Massive MIMO, characterized by where the baseband processing is carried out: it can either be fully centralized at the CPU or distributed as far as possible to pre-processing units located at each access point. The following video elaborates on these different implementation levels:

My answer

The simple answer to the question posed in the first paragraph is that C-RAN is a network architecture and Cell-free Massive MIMO is a physical-layer technology that can be deployed using either C-RAN or some other architecture. It is not a matter of selecting one or the other, but both can coexist and benefit from each other. My group is presenting a paper at ICC 2022 that exemplifies how to optimize the operation of Cell-free Massive MIMO when it is implemented in C-RAN.

The weak spot of my answer is that C-RAN was proposed as early as 2011, based on the industry’s need to consolidate its network assets, and a large amount of academic research has been carried out since then. Some of the papers on C-RAN have considered physical-layer techniques that resemble Cell-free Massive MIMO but without using that terminology. Some people might rightfully associate C-RAN with cell-free-like processing schemes, because they fit so well together. After all, Cell-free Massive MIMO is a revamp of Network MIMO that makes it more practical.

Making Cell-Free Massive MIMO Competitive

The paper “Making Cell-Free Massive MIMO Competitive with MMSE Processing and Centralized Implementation” that I’ve authored together with Luca Sanguinetti has been awarded the 2022 IEEE Marconi Prize Paper Award in Wireless Communications. This is a great honor that places the paper on the same list as many seminal papers published in the IEEE Transactions on Wireless Communications.

I will take this opportunity to elaborate on five things that I learned while writing the paper. The basic premise is that we analyze the uplink of a system with many distributed access points (APs) that serve a collection of user devices at the same time and frequency. We compare the data rates that can be achieved depending on how deeply the APs collaborate, from Level 1 (cellular network with no cooperation) to Level 4 (cell-free network with centralized computations based on complete signal knowledge). We also compare maximum ratio (MR) processing of the received signals with local and centralized forms of minimum mean-squared error (MMSE) processing.

I learned the following five things:

  1. MMSE processing always outperforms MR processing. This might seem obvious, since the former scheme can suppress interference, but the truly surprising thing was that the performance difference is large even for single-antenna APs that operate in a distributed fashion. The reason is that MMSE processing provides much more channel hardening. (A toy comparison is sketched right after this list.)
  2. Distributed MR processing is the worst-case scenario. Many of the early works on cell-free massive MIMO assumed distributed MR processing and focused on developing advanced power control algorithms. We demonstrated that one can achieve better performance with MMSE processing and rudimentary power control; thus, when designing a cell-free system, the choice of processing scheme is of primary importance, while the choice of power control is secondary.
  3. Linear uplink processing is nearly optimal. In a fully centralized implementation, it is possible to implement non-linear processing schemes for signal detection; in particular, successive interference cancellation could be used. We showed that this approach only increases the sum rate by a few percent, which isn’t enough to motivate the increased complexity. The reason is that we seldom have any strong interfering signals, just many weakly interfering signals.
  4. Distributed processing increases fronthaul signaling. Since the received signals are distributed over the APs, it might seem logical that one can reduce the fronthaul signaling by also doing parts of the processing distributively. This is not the case in the intended operating regime of cell-free massive MIMO, where each AP serves at least as many users as it has antennas. In this case, fewer parameters need to be sent over the fronthaul with a centralized implementation!
  5. Max-min fairness is a terrible performance goal. While a key motivation behind cell-free massive MIMO is to even out the performance variations in the system, compared to cellular networks, we shouldn’t strive for exact uniformity. To put it differently, the user with the worst channel conditions in the country shouldn’t dictate the performance of everyone else! Several early works on the topic focused on max-min fairness optimization and showed very promising simulation results, but when I attempted to reproduce these results, I noticed that they were obtained by terminating the optimization algorithms long before the max-min fairness solution was found. This indicates that we need a performance goal based on relative fairness (proportional fairness?) instead of the overly strict max-min fairness goal.
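
To make point 1 concrete, here is a minimal single-snapshot uplink simulation of centralized operation (a toy version with perfect CSI, one channel realization, and an assumed random path-loss model; the paper computes ergodic rates with estimated channels). It compares MR with centralized MMSE combining, applied to the collective channel matrix of all APs:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, p, sigma2 = 64, 8, 1.0, 0.1      # single-antenna APs, users, power, noise

# Toy collective channel: random large-scale fading for each AP-user pair
beta = 10 ** (-rng.uniform(0, 3, size=(L, K)))
H = np.sqrt(beta / 2) * (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K)))

def sum_rate(V):
    """Sum spectral efficiency [bit/s/Hz] when user k is detected with V[:, k]."""
    total = 0.0
    for k in range(K):
        g = V[:, k].conj() @ H             # effective channels after combining
        desired = p * np.abs(g[k])**2
        interference = p * np.sum(np.abs(g)**2) - desired
        noise = sigma2 * np.linalg.norm(V[:, k])**2
        total += np.log2(1 + desired / (interference + noise))
    return total

V_mr = H                                                    # maximum ratio
V_mmse = np.linalg.solve(p * H @ H.conj().T + sigma2 * np.eye(L), p * H)
print(f"MR:   {sum_rate(V_mr):5.1f} bit/s/Hz")
print(f"MMSE: {sum_rate(V_mmse):5.1f} bit/s/Hz")
```

MMSE combining suppresses the interference between the eight users, while MR lets that interference limit the rates.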

Since the paper was written in 2019, I have treated centralized MMSE processing as the gold standard for cell-free massive MIMO. I have continued looking for ways to reduce the fronthaul signaling while making use of distributed computational resources (which will likely be available in practice). I will mention two recent papers in this direction. The first is “MMSE-Optimal Sequential Processing for Cell-Free Massive MIMO With Radio Stripes”, which shows that one can implement centralized MMSE processing in a distributed/sequential manner if the fronthaul topology is sequential. The paper “Team MMSE Precoding with Applications to Cell-free Massive MIMO” develops a methodology for dealing with the corresponding downlink problem, which is more challenging due to power and causality constraints.
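
The sequential idea admits a compact single-user sketch (my own simplification with assumed numbers; the actual paper handles multiple users and their mutual interference jointly). Each AP along the stripe refines the estimate received from its neighbor through a Bayesian LMMSE update, and the final output coincides with the fully centralized MMSE estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
L, N, p, sigma2 = 10, 4, 1.0, 0.5      # APs on the stripe, antennas per AP
h = [(rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2) for _ in range(L)]
s = np.sqrt(p / 2) * (rng.normal() + 1j * rng.normal())    # transmitted signal
y = [hl * s + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
     for hl in h]                                          # received signals

# Sequential LMMSE: pass (estimate, error variance) from AP to AP along the stripe
m, P = 0.0, p                          # prior mean and variance of s
for hl, yl in zip(h, y):
    C = P * np.outer(hl, hl.conj()) + sigma2 * np.eye(N)   # innovation covariance
    K = P * hl.conj() @ np.linalg.inv(C)                   # LMMSE gain
    m = m + K @ (yl - hl * m)                              # refined estimate
    P = (P * (1 - K @ hl)).real                            # refined error variance

# Centralized MMSE on the stacked received signals gives the same estimate
h_all, y_all = np.concatenate(h), np.concatenate(y)
m_central = p * h_all.conj() @ y_all / (p * np.linalg.norm(h_all)**2 + sigma2)
print(abs(m - m_central))              # ~0: sequential equals centralized
```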

Finally, let me thank IEEE ComSoc for not only giving us the Marconi Prize Paper Award but also producing the following nice video about the paper: