We have now released the 34th episode of the podcast Wireless Future. It has the following abstract:
The speed of wired optical fiber technology will soon reach 1 million megabits per second, also known as 1 terabit/s. Wireless technology is improving at the same pace but lags 10 years behind in speed; thus, we can expect to reach 1 terabit/s over wireless during the next decade. In this episode, Erik G. Larsson and Emil Björnson discuss these expected developments with a focus on the potential use cases and how to reach these immense speeds in different frequency bands – from 1 GHz to 200 GHz. Their own thoughts are mixed with insights gathered at a recent workshop at TU Berlin. Major research challenges remain, particularly related to algorithms, transceiver hardware, and decoding complexity.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
I will take this opportunity to elaborate on five things that I learned while writing the paper. The basic premise is that we analyze the uplink of a system with many distributed access points (APs) that serve a collection of user devices at the same time and frequency. We compared the data rates that can be achieved depending on how deeply the APs are collaborating, from Level 1 (cellular network with no cooperation) to Level 4 (cell-free network with centralized computations based on complete signal knowledge). We also compared maximum ratio (MR) processing of the received signals with local and centralized forms of minimum mean-squared error (MMSE) processing.
I learned the following five things:
MMSE processing always outperforms MR processing. This might seem obvious, since the former scheme can suppress interference, but the really surprising thing was that the performance difference is large even for single-antenna APs that operate distributively. The reason is that MMSE processing provides much more channel hardening.
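To see the difference in a simple setting, here is a Monte-Carlo sketch that compares MR combining with centralized MMSE combining over the collective channel from K users to L single-antenna APs. The i.i.d. Rayleigh fading model and the parameter values are illustrative assumptions, not the simulation setup from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 8          # single-antenna APs, simultaneously served users
p, sigma2 = 1.0, 0.1  # uplink transmit power, noise power
trials = 500

rate_mr = rate_mmse = 0.0
for _ in range(trials):
    # Collective channel: column k is user k's channel to all L APs
    H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)
    A = p * H @ H.conj().T + sigma2 * np.eye(L)  # received-signal covariance
    for k in range(K):
        hk = H[:, k]
        # MR uses the channel itself; MMSE whitens interference-plus-noise
        for name, v in (("mr", hk), ("mmse", np.linalg.solve(A, hk))):
            signal = p * abs(v.conj() @ hk) ** 2
            interf = sum(p * abs(v.conj() @ H[:, j]) ** 2
                         for j in range(K) if j != k)
            sinr = signal / (interf + sigma2 * np.linalg.norm(v) ** 2)
            if name == "mr":
                rate_mr += np.log2(1 + sinr)
            else:
                rate_mmse += np.log2(1 + sinr)

print(f"avg sum rate  MR: {rate_mr/trials:.1f}  MMSE: {rate_mmse/trials:.1f} bit/s/Hz")
```

Since the MMSE combiner maximizes the SINR by construction, its rate is higher in every channel realization; the sketch only quantifies by how much in this toy model.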
Distributed MR processing is the worst-case scenario. Many of the early works on cell-free massive MIMO assumed distributed MR processing and focused on developing advanced power control algorithms. We demonstrated that one can achieve better performance with MMSE processing and rudimentary power control; thus, when designing a cell-free system, the choice of processing scheme is of primary importance, while the choice of power control is secondary.
Linear uplink processing is nearly optimal. In a fully centralized implementation, it is possible to implement non-linear processing schemes for signal detection; in particular, successive interference cancellation could be used. We showed that this approach only increases the sum rate by a few percent, which isn’t enough to motivate the increased complexity. The reason is that we seldom have any strong interfering signals, just many weakly interfering signals.
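This gap can be checked with a small experiment: compare the uplink sum capacity (achievable with MMSE combining plus successive interference cancellation) against the sum rate of linear MMSE combining that treats all interference as noise. The i.i.d. Rayleigh model and the parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 32, 8   # total receive antennas (centralized processing), users
snr = 10.0     # per-user transmit SNR (power over noise power)
trials = 200

gap = []
for _ in range(trials):
    H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)
    # Non-linear benchmark: sum capacity, achieved by MMSE combining with SIC
    _, logdet = np.linalg.slogdet(np.eye(K) + snr * H.conj().T @ H)
    c_sic = logdet / np.log(2)
    # Linear MMSE: no cancellation, interference treated as noise
    A = snr * H @ H.conj().T + np.eye(L)
    c_lin = 0.0
    for k in range(K):
        hk = H[:, k]
        Ak = A - snr * np.outer(hk, hk.conj())  # interference + noise covariance
        sinr = snr * (hk.conj() @ np.linalg.solve(Ak, hk)).real
        c_lin += np.log2(1 + sinr)
    gap.append(100 * (c_sic - c_lin) / c_sic)

print(f"SIC gain over linear MMSE: {np.mean(gap):.1f}% of the sum rate")
```

With many more antennas than users, as in this toy setup, the linear scheme operates close to capacity, which mirrors the conclusion in the paper.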
Distributed processing increases fronthaul signaling. Since the received signals are distributed over the APs, it might seem logical that one can reduce the fronthaul signaling by also doing parts of the processing distributively. This is not the case in the intended operating regime of cell-free massive MIMO, where each AP serves at least as many users as it has antennas. In that regime, fewer parameters need to be sent over the fronthaul with a centralized implementation!
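The counting argument can be sketched as follows. The accounting below is a simplified illustration (the paper's exact bookkeeping also includes statistical parameters that are shared less frequently), assuming the centralized AP forwards every received sample while the distributed AP forwards one soft data estimate per user:

```python
# Back-of-envelope fronthaul count per AP and coherence block (complex scalars).
# Illustrative accounting only, under the assumptions stated above.
tau_c, tau_p = 200, 10   # samples per coherence block / pilot samples
N, K = 4, 8              # antennas per AP, served users (the K >= N regime)

centralized = tau_c * N            # all pilot and data samples are forwarded
distributed = (tau_c - tau_p) * K  # one soft estimate per user and data sample

print(centralized, distributed)  # centralized needs fewer scalars when K >= N
```

With these numbers, the centralized implementation sends 800 scalars versus 1520 for the distributed one, and the comparison tilts further toward centralized as K grows relative to N.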
Max-min fairness is a terrible performance goal. While a key motivation behind cell-free massive MIMO is to even out the performance variations in the system, compared to cellular networks, we shouldn’t strive for exact uniformity. To put it differently, the user with the worst channel conditions in the country shouldn’t dictate the performance of everyone else! Several early works on the topic focused on max-min fairness optimization and showed very promising simulation results, but when I attempted to reproduce these results, I noticed that they were obtained by terminating the optimization algorithms long before the max-min fairness solution was found. This indicates that we need a performance goal based on relative fairness (proportional fairness?) instead of the overly strict max-min fairness goal.
Since the paper was written in 2019, I have treated centralized MMSE processing as the gold standard for cell-free massive MIMO. I have continued looking for ways to reduce the fronthaul signaling while making use of distributed computational resources (which likely will be available in practice). I will mention two recent papers in this direction. The first is “MMSE-Optimal Sequential Processing for Cell-Free Massive MIMO With Radio Stripes”, which shows that one can implement centralized MMSE processing in a distributed/sequential manner, provided that the fronthaul is sequential. The paper “Team MMSE Precoding with Applications to Cell-free Massive MIMO” develops a methodology for dealing with the corresponding downlink problem, which is more challenging due to power and causality constraints.
Finally, let me thank IEEE ComSoc for not only giving us the Marconi Prize Paper Award but also producing the following nice video about the paper:
I was recently invited to the Ericsson Imagine Studio to have a look at the company’s wide range of Massive MIMO products. The latest addition is the AIR 3268 with 32 antenna-integrated radios that only weighs 12 kg. In this article, I will share the new insights that I gained from this visit.
Ericsson currently has around 10 mid-band Massive MIMO products, which are divided into three categories: capacity, coverage, and compact. The products provide different combinations of:
Number of radio branches, which determines the beamforming variability;
Maximum bandwidth, which should be matched to the operator’s spectrum assets.
The new lightweight AIR 3268 (that I got the chance to carry myself) belongs to the compact category, since it “only” radiates 200 W over 200 MHz and “only” contains 128 radiating elements, which are connected to 32 radio branches (sometimes referred to as transceiver chains). A radio branch consists of filters, converters, and amplifiers. The radiating elements are organized in an 8 x 8 array, with dual-polarized elements at each location. Bo Göransson, a Senior Expert at Ericsson, told me that the element spacing is roughly 0.5λ in the horizontal dimension and 0.7λ in the vertical dimension. The exact spacing is fine-tuned based on thermal and mechanical aspects. It also varies in the sense that the physical spacing is constant but becomes a different fraction of the wavelength λ depending on the frequency band used.
The reason for having a larger spacing in the vertical dimension is to obtain sharper vertical directivity, so that the radiated energy is more focused down towards the ground. This also explains why the box is rectangular, even though the elements are organized as an 8 x 8 array. Four vertically neighboring elements with equal polarization are connected to the same radio branch, which Ericsson calls a subarray. Each subarray behaves as an antenna with a fixed radiation pattern that is relatively narrow in the vertical domain.
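As a rough illustration of the subarray idea, the following sketch computes the vertical array factor of four isotropic elements with 0.7λ spacing (an idealized textbook model, not Ericsson's actual element pattern) and extracts the resulting half-power beamwidth:

```python
import numpy as np

# Array factor of a 4-element vertical subarray with 0.7-wavelength spacing,
# steered to broadside. Isotropic elements are assumed for simplicity.
M, spacing = 4, 0.7                           # elements, spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)  # elevation from broadside
psi = 2 * np.pi * spacing * np.sin(theta)         # inter-element phase shift
af = np.abs(np.exp(1j * np.outer(np.arange(M), psi)).sum(axis=0)) / M

# Half-power (-3 dB) beamwidth of the main lobe
hpbw = np.degrees(np.ptp(theta[af**2 >= 0.5]))
print(f"vertical half-power beamwidth: {hpbw:.1f} degrees")
```

In this idealized model, the beamwidth comes out at roughly 18 degrees, much narrower than a single element, which is why the subarray concentrates energy toward the ground.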
This lightweight product is well suited for Swedish cities, which are characterized by low-rise buildings and operators that each have around 100 MHz of spectrum in the 3.5 GHz band.
If we take the AIR 3268 as a starting point, the coverage range can be improved by increasing the number of radiating elements to 192 and increasing the maximum output power to 320 W. The AIR 3236 in the coverage category has that particular configuration. To further increase the capacity, the number of radio branches can also be increased to 64, as in the AIR 6419 that I wrote about earlier this year. These changes will increase the weight from 12 kg to 20 kg.
Why low weight matters
There are multiple reasons why the weight of a Massive MIMO array matters in practice. Firstly, it eases deployment since a single engineer can carry the array; in fact, there is a 25 kg per-person limit in the industry, which implies that a single engineer may carry one AIR 3268 in each hand (as shown in the press photo from Ericsson). Secondly, the site rent in towers depends on the weight, as well as on the wind load, which naturally reduces when the array shrinks in size. All current Ericsson products have front dimensions determined by the antenna array size, since all other components are placed behind the radiating elements. This was not the case a few years ago and demonstrates how the products have evolved. The thickness of the panel is determined by the radio components as well as the heatsink, which is designed to manage ambient temperatures up to 55°C.
The total energy consumption is reduced by 10% in the new product, compared to its predecessor. This is the result of fine-tuning all the analog components. According to Måns Hagström, Senior Radio Systems Architect at Ericsson, there are no “low-hanging fruits” anymore in the hardware design since the Massive MIMO product line is now mature. However, there is a new software feature called Deep Sleep, where power amplifiers and analog components are turned off in low-traffic situations to save power. Turning off components is not as simple as it sounds, since it must be possible to turn them on again within a millisecond so that no coverage or delay issues are created.
The channel state information needed for beamforming can be acquired either by codebook-based feedback or by utilizing uplink-downlink reciprocity in 5G, where the latter is what most of the academic literature focuses on. The beamforming computation in Ericsson’s products is divided between the Massive MIMO panel and the baseband processing unit, which are interconnected using the eCPRI interface. The purpose of the computational split is to reduce the fronthaul signaling by exploiting the fact that Massive MIMO transmits a relatively small number of data streams/layers (e.g., 1-16) using a substantially larger number of radios (e.g., 32 or 64). More precisely, the Ericsson Silicon in the panel takes care of the mapping from data streams to radio branches, so that the capacity requirement on the eCPRI interface is independent of the number of radio branches. It is actually the same silicon that is used in all the current Massive MIMO products. I was told that some kind of regularized zero-forcing processing is utilized when computing the multi-user beamforming. Billy Hogan, Principal Engineer at Ericsson, pointed out that the beamforming implementation is flexible in the sense that there are tunable parameters that can be revised through a software upgrade, as the company learns more about how Massive MIMO works in practical deployments.
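As an illustration of the general principle, the sketch below computes a regularized zero-forcing precoder in its standard textbook form (not Ericsson's proprietary implementation; the parameter values are arbitrary assumptions) and checks that it leaves little inter-user leakage:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 32, 8   # radio branches, simultaneous user layers
snr = 10.0     # per-user SNR; a common heuristic sets regularization to K/snr
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Regularized zero-forcing: invert the multi-user channel, but add diagonal
# loading so the precoder degrades gracefully toward MR at low SNR.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / snr) * np.eye(K))
W /= np.linalg.norm(W)  # normalize total transmit power

# The effective channel H @ W should be nearly diagonal: each user receives
# its own stream with only weak contributions from the other streams.
E = H @ W
leakage = np.abs(E - np.diag(np.diag(E))).sum() / np.abs(np.diag(E)).sum()
print(f"off-diagonal leakage relative to desired gains: {leakage:.2f}")
```

The diagonal loading term is the “regularized” part: with it set to zero, the precoder reduces to plain zero-forcing, which nulls interference exactly but amplifies noise in poorly conditioned channels.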
Hagström also pointed out that a key novelty in 5G is the larger variation in capabilities between handsets, for example, in how many antennas they have, how flexible the antennas are, and how they make measurements and decisions on the preferred mode of operation to report back to the base station. The 5G standard specifies protocols but leaves the door open for both clever and less sophisticated implementations. While Massive MIMO has been shown to provide impressive spectral efficiency in field trials, it remains to be seen how large the spectral efficiency gains become in practice, when using commercial handsets and real data traffic. It will likely take several years before the data traffic reaches a point where the capability of spatially multiplexing many users is needed most of the time. In the meantime, these Massive MIMO panels will deliver higher single-user data rates throughout the coverage area than previous base stations, thanks to the stronger and more flexible beamforming.
One of the main take-aways from my visit to the Ericsson Imagine Studio in Stockholm is that the Massive MIMO product development has come much further than I anticipated. Five years ago, when I wrote the book Massive MIMO Networks, I had the vision that we should eventually be able to squeeze all the components into a compact box with a size that matches the antenna array dimensions. But I couldn’t imagine that it would happen already in 2021 when the 5G deployments are still in their infancy. With this in mind, it is challenging to speculate on what will come next. If the industry can already build 64 antenna-integrated radios into a box that weighs less than 20 kg, then one can certainly build even larger arrays, when there will be demand for that.
The only hint about the future that I picked up from my visit is that Ericsson already considers Massive MIMO technology and its evolutions to be natural parts of 6G solutions.
The name “Massive MIMO” has been debated since its inception. Tom Marzetta introduced it ten years ago as one of several potential names for his envisioned MIMO technology with a very large number of antennas. Different researchers used different terminologies in their papers during the first years of research on the topic, but the community eventually converged to calling it Massive MIMO.
The apparent issue with that terminology is that the adjective “massive” can have different meanings. The first definition in the Merriam-Webster dictionary is “consisting of a large mass”, in the sense of being “bulky” and “heavy”. The second definition is “large in scope or degree”, in the sense of being “large in comparison to what is typical”.
It is probably the second definition that Marzetta had in mind when introducing the name “Massive MIMO”; that is, a MIMO technology with a number of antennas that is large in comparison to what was typically considered in the 4G era. Yet, there has been a perception in the industry that one cannot build a base station with many antennas without it also being bulky and heavy (i.e., the first definition).
Massive MIMO products are not heavy anymore
Ericsson and Huawei have recently proved that this perception is wrong. The Ericsson AIR 6419 that was announced in February (to be released later this year) contains 64 antenna-integrated radios in a box that is roughly 1 x 0.5 m, with a weight of only 20 kg. This can be compared with Ericsson’s first Massive MIMO product from 2018, which weighed 60 kg. The product is designed for the 3.5 GHz band, supports 200 MHz of bandwidth, and delivers 320 W of output power. The box contains an application-specific integrated circuit (ASIC) that handles parts of the baseband processing. Huawei introduced a similar product in February that weighs 19 kg and supports 400 MHz of spectrum, but fewer details are available regarding it.
These products seem very much in line with what Massive MIMO researchers like me have been imagining when writing scientific papers. It is impressive to see how quickly this vision has turned into reality, and how 5G has become synonymous with Massive MIMO deployments in sub-6 GHz bands, despite all the fuss about small cells with mmWave spectrum. While both technologies can be used to support higher traffic loads, it is clear that spatial multiplexing has now become the primary solution adopted by network operators in the 5G era.
Open RAN enabled Massive MIMO
While the new Ericsson and Huawei products demonstrate how a tight integration of antennas, radios, and baseband processing enables compact, low-weight Massive MIMO implementation, there is also an opposite trend. Mavenir and Xilinx have teamed up to build a Massive MIMO solution that builds on the Open RAN principle of decoupling hardware and software (so that the operator can buy these from different vendors). They claim that their first 64-antenna product, which combines Xilinx’s radio hardware with Mavenir’s cloud-computing platform, will be available by the end of this year. The drawback with the hardware-software decoupling is the higher energy consumption caused by increased fronthaul signaling (when all processing is done “in the cloud”) and the use of field-programmable gate arrays (FPGAs) instead of ASICs (since a higher level of flexibility is needed in the processing units when these are not co-designed with the radios).
Since the 5G technology is still in its infancy, it will be exciting to see how it evolves over the coming years. I believe we will see even larger antenna numbers in the 3.5 GHz band, new array form factors, products that integrate many frequency bands in the same box, digital beamforming in mmWave bands, and new types of distributed antenna deployments. The impact of Massive MIMO will be massive, even if the weight isn’t massive.
Mobile networks are divided into semi-autonomous cells. This is essentially a divide-and-conquer approach to network operation, where each cell becomes simple to operate and the reuse of radio resources over the cells can be planned in advance. This network structure was proposed already in the 1950s and has been vital for the widespread adoption of mobile network technology. However, the weaknesses of the cellular architecture have become increasingly apparent as mobile data has replaced voice calls as the main type of traffic. While the peak data rates are high in contemporary networks, the user-guaranteed rates are very modest, due to the large pathloss variations and inter-cell interference that are inherent in the cellular architecture.
A promising solution to these issues is to leave the cellular paradigm behind and create a new network architecture that is free from cells. This vision is called Cell-free Massive MIMO.
This is a technology that essentially combines three main components that have previously been considered separately: 1) the efficient physical-layer operation with many antennas that enabled widespread adoption of Massive MIMO in cellular networks; 2) the vision of deploying many access points close to the users, to create a reality where users are surrounded by access points instead of the opposite; 3) joint transmission and reception from distributed access points, which has been analyzed under many names over the last two decades, including coordinated multipoint (CoMP).
This blog post is about the first book on the topic: “Foundations of User-Centric Cell-Free Massive MIMO” by Özlem Tuğfe Demir, Emil Björnson, and Luca Sanguinetti. We provide the historical background, theoretical foundations, and state-of-the-art signal processing algorithms. The book is 300 pages long and is accompanied by a GitHub repository with all the simulation code. We hope that this book will serve as the starting point for much further research. The last section of the book outlines many future research directions.
NOW publishers is offering a free PDF until April 2, 2021. To obtain it, go to the book’s website, create a free account, and then click on download. For the same period, they are offering printed copies for the special price of $40 (including non-trackable shipping). To purchase the printed version, go to the secure Order Form and use the Promotion Code 584793.
Since 5G is designed to be future-proof and enable decoupling of the control signaling and data transmissions, I believe that the 5G networks will become increasingly cell-free during this decade, while beyond 5G networks will embrace the cell-free architecture from the outset.
A new EU-funded 6G initiative, the REINDEER project, joins forces from academia and industry to develop and build a new type of multi-antenna-based smart connectivity platform integral to future 6G systems. From Ericsson’s new site:
The project’s name is derived from REsilient INteractive applications through hyper Diversity in Energy-Efficient RadioWeaves technology, and the development of “RadioWeaves” technology will be a key deliverable of the project. This new wireless access infrastructure consists of a fabric of distributed radio, compute, and storage resources. It will advance the ideas of large-scale intelligent surfaces and cell-free wireless access to offer capabilities far beyond future 5G networks: capacity that scales toward the quasi-infinite, perceived zero latency, and interaction with a large number of embedded devices.