Episode 36: 6G from an Operator Perspective

We have now released the 36th episode of the podcast Wireless Future. It has the following abstract:

It is easy to get carried away by futuristic 6G visions, but what matters in the end is what technology and services the telecom operators will deploy. In this episode, Erik G. Larsson and Emil Björnson discuss a new white paper from SK Telecom that describes the lessons learned from 5G and how these experiences can be utilized to make 6G more successful. The paper and conversation cover network evolution, commercial use cases, virtualization, artificial intelligence, and frequency spectrum. The latest developments in defining official 6G requirements are also discussed. The white paper can be found here. The following news article about mmWave licenses is mentioned. The IMT-2030 Framework for 6G can be found here.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

25 Years of Signal Processing Advances for Multiantenna Communications

Multiantenna communications have a long and winding history, starting with how Guglielmo Marconi used an array of phase-aligned antennas to communicate over the Atlantic and Karl Ferdinand Braun used a triangular array to transmit phase-shifted signal copies and thereby beamform in a controlled direction. Antenna arrays have since also been used for spatial diversity and spatial multiplexing. The cellular network pioneer Martin Cooper tried to launch multi-user MIMO in the 1990s but concluded in 1996 that “computers weren’t powerful enough to operate it”.

During the last 25 years, multiantenna communications have evolved from a technology used only for beamforming and diversity into a mainstream enabler of high-capacity communication in 5G. Both single-user and multi-user MIMO are used when connecting any modern mobile phone to the Internet, in both the 3 GHz and mmWave bands.

The IEEE Signal Processing Society is celebrating its 75th anniversary and, therefore, the IEEE Signal Processing Magazine is publishing a special issue focusing on the last 25 years of research developments. I have written a paper for this issue called “25 Years of Signal Processing Advances for Multiantenna Communications”. It is now available on arXiv, and it is co-authored by Yonina Eldar, Erik G. Larsson, Angel Lozano, and H. Vincent Poor. I hope you will like it!

Episode 34: How to Achieve 1 Terabit/s over Wireless?

We have now released the 34th episode of the podcast Wireless Future. It has the following abstract:

The speed of wired optical fiber technology is soon reaching 1 million megabits per second, also known as 1 terabit/s. Wireless technology is improving at the same pace but is 10 years behind in speed, so we can expect to reach 1 terabit/s over wireless during the next decade. In this episode, Erik G. Larsson and Emil Björnson discuss these expected developments with a focus on the potential use cases and how to reach these immense speeds in different frequency bands – from 1 GHz to 200 GHz. Their own thoughts are mixed with insights gathered at a recent workshop at TU Berlin. Major research challenges remain, particularly related to algorithms, transceiver hardware, and decoding complexity.
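As a back-of-the-envelope illustration (my own sketch, not a calculation from the episode), one can compute the aggregate spectral efficiency that 1 terabit/s requires for a few assumed bandwidths, and how many spatial streams would be needed if each stream carries an optimistic 10 bit/s/Hz:

```python
import math

TARGET_BPS = 1e12        # 1 terabit/s
PER_STREAM_SE = 10.0     # bit/s/Hz per MIMO stream; an optimistic assumption

# Assumed available bandwidths around different carrier frequencies
# (illustrative numbers, not figures from the episode)
bandwidths_hz = {"1 GHz band": 100e6, "30 GHz band": 1e9, "200 GHz band": 50e9}

streams_needed = {}
for band, bw in bandwidths_hz.items():
    total_se = TARGET_BPS / bw  # required spectral efficiency in bit/s/Hz
    streams_needed[band] = math.ceil(total_se / PER_STREAM_SE)
    print(f"{band}: {total_se:.0f} bit/s/Hz -> {streams_needed[band]} streams")
```

The arithmetic mirrors the episode's framing: at low carrier frequencies, bandwidth is so scarce that extreme spatial multiplexing would be needed, while around 200 GHz a couple of streams over tens of gigahertz may suffice.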

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Making Cell-Free Massive MIMO Competitive

The paper “Making Cell-Free Massive MIMO Competitive with MMSE Processing and Centralized Implementation” that I’ve authored together with Luca Sanguinetti has been awarded the 2022 IEEE Marconi Prize Paper Award in Wireless Communications. This is a great honor that places the paper on the same list as many seminal papers published in the IEEE Transactions on Wireless Communications.

I will take this opportunity to elaborate on five things that I learned while writing the paper. The basic premise is that we analyzed the uplink of a system where many distributed access points (APs) serve a collection of user devices at the same time and frequency. We compared the data rates that can be achieved depending on how deeply the APs collaborate, from Level 1 (cellular network with no cooperation) to Level 4 (cell-free network with centralized computations based on complete signal knowledge). We also compared maximum ratio (MR) processing of the received signals with local and centralized forms of minimum mean-squared error (MMSE) processing.

I learned the following five things:

  1. MMSE processing always outperforms MR processing. This might seem obvious, since the former scheme can suppress interference, but the really surprising thing was that the performance difference is large even for single-antenna APs that operate distributively. The reason is that MMSE processing provides much more channel hardening.
  2. Distributed MR processing is the worst-case scenario. Many of the early works on cell-free massive MIMO assumed distributed MR processing and focused on developing advanced power control algorithms. We demonstrated that one can achieve better performance with MMSE processing and rudimentary power control; thus, when designing a cell-free system, the choice of processing scheme is of primary importance, while the choice of power control is secondary.
  3. Linear uplink processing is nearly optimal. In a fully centralized implementation, it is possible to implement non-linear processing schemes for signal detection; in particular, successive interference cancellation could be used. We showed that this approach only increases the sum rate by a few percent, which isn’t enough to motivate the increased complexity. The reason is that we seldom have any strong interfering signals, just many weakly interfering signals.
  4. Distributed processing increases fronthaul signaling. Since the received signals are distributed over the APs, it might seem logical that one could reduce the fronthaul signaling by also doing parts of the processing distributively. This is not the case in the intended operating regime of cell-free massive MIMO, where each AP serves at least as many users as it has antennas. In this regime, fewer parameters need to be sent over the fronthaul with a centralized implementation!
  5. Max-min fairness is a terrible performance goal. While a key motivation behind cell-free massive MIMO is to even out the performance variations in the system, compared to cellular networks, we shouldn’t strive for exact uniformity. To put it differently, the user with the worst channel conditions in the country shouldn’t dictate the performance of everyone else! Several early works on the topic focused on max-min fairness optimization and showed very promising simulation results, but when I attempted to reproduce these results, I noticed that they were obtained by terminating the optimization algorithms long before the max-min fairness solution was found. This indicates that we need a performance goal based on relative fairness (proportional fairness?) instead of the overly strict max-min fairness goal.
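The first lesson can be illustrated with a toy centralized-uplink simulation (my own sketch with i.i.d. Rayleigh fading and arbitrary toy sizes, not the system model from the paper). MMSE combining maximizes each user's SINR among all linear combiners, so it always beats MR, which ignores interference:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 4            # single-antenna APs and users (toy sizes)
snr = 10.0              # per-link SNR in linear scale; an assumption

# i.i.d. Rayleigh channels; column k is user k's channel to the L APs
H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

def sum_rate(V):
    """Sum of log2(1 + SINR_k) with combining vectors in the columns of V."""
    rate = 0.0
    for k in range(K):
        v = V[:, k]
        sig = snr * abs(v.conj() @ H[:, k]) ** 2
        intf = snr * sum(abs(v.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
        noise = np.linalg.norm(v) ** 2  # unit-power noise at each AP
        rate += np.log2(1 + sig / (intf + noise))
    return rate

V_mr = H                                  # maximum ratio: v_k = h_k
A = snr * (H @ H.conj().T) + np.eye(L)    # MMSE: v_k proportional to A^{-1} h_k
V_mmse = np.linalg.solve(A, H)

print(f"MR   sum rate: {sum_rate(V_mr):.2f} bit/s/Hz")
print(f"MMSE sum rate: {sum_rate(V_mmse):.2f} bit/s/Hz")
```

Even in this tiny example the MMSE sum rate is higher; the paper shows that the gap remains large even with single-antenna APs operating distributively.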

Since the paper was written in 2019, I have treated centralized MMSE processing as the gold standard for cell-free massive MIMO. I have continued looking for ways to reduce the fronthaul signaling while making use of the distributed computational resources that will likely be available in practice. I will mention two recent papers in this direction. The first is “MMSE-Optimal Sequential Processing for Cell-Free Massive MIMO With Radio Stripes”, which shows that centralized MMSE processing can be implemented in a distributed/sequential manner if the fronthaul is sequential. The second, “Team MMSE Precoding with Applications to Cell-free Massive MIMO”, develops a methodology for the corresponding downlink problem, which is more challenging due to power and causality constraints.

Finally, let me thank IEEE ComSoc for not only giving us the Marconi Prize Paper Award but also producing the following nice video about the paper:

A Closer Look at Massive MIMO From Ericsson

I was recently invited to the Ericsson Imagine Studio to have a look at the company’s wide range of Massive MIMO products. The latest addition is the AIR 3268 with 32 antenna-integrated radios that only weighs 12 kg. In this article, I will share the new insights that I gained from this visit.

The new AIR 3268 has an impressively low weight.

Ericsson currently has around 10 mid-band Massive MIMO products, which are divided into three categories: capacity, coverage, and compact. The products provide different combinations of:

  • Maximum output power and number of radiating elements, which jointly determine the effective isotropic radiated power;
  • Number of radio branches, which determines the beamforming variability;
  • Maximum bandwidth, which should be matched to the operator’s spectrum assets.
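The first bullet can be sketched numerically. A rough approximation (my own simplification, ignoring per-element gain and losses) is that the EIRP equals the conducted power plus the array gain of coherent transmission over all radiating elements; the numbers below are the AIR 3268 figures quoted later in this article:

```python
import math

def eirp_dbm(total_power_w, n_elements, element_gain_dbi=0.0):
    """Rough EIRP estimate: conducted power plus the array gain of
    n_elements coherently combined radiating elements.

    Simplified sketch: losses are ignored, and element_gain_dbi is a
    placeholder (0 dBi by default), not a figure from Ericsson.
    """
    p_dbm = 10 * math.log10(total_power_w * 1000)      # watts -> dBm
    array_gain_db = 10 * math.log10(n_elements)        # coherent combining
    return p_dbm + array_gain_db + element_gain_dbi

# AIR 3268-like numbers from the article: 200 W over 128 radiating elements
print(f"EIRP ≈ {eirp_dbm(200, 128):.1f} dBm")
```

This gives roughly 74 dBm, which shows why both the power and the element count matter: doubling either one adds 3 dB of EIRP.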

The new lightweight AIR 3268 (which I got the chance to carry myself) belongs to the compact category, since it “only” radiates 200 W over 200 MHz and “only” contains 128 radiating elements, which are connected to 32 radio branches (sometimes referred to as transceiver chains). A radio branch consists of filters, converters, and amplifiers. The radiating elements are organized in an 8 x 8 array, with dual-polarized elements at each location. Bo Göransson, a Senior Expert at Ericsson, told me that the element spacing is roughly 0.5λ in the horizontal dimension and 0.7λ in the vertical dimension. The exact spacing is fine-tuned based on thermal and mechanical aspects; moreover, since the physical spacing is fixed, it corresponds to a different fraction of the wavelength λ depending on the frequency band used.

The reason for having a larger spacing in the vertical dimension is to obtain sharper vertical directivity, so that the radiated energy is more focused down towards the ground. This also explains why the box is rectangular, even though the elements are organized as an 8 x 8 array. Four vertically neighboring elements with equal polarization are connected to the same radio branch, forming what Ericsson calls a subarray. Each subarray behaves as an antenna with a fixed radiation pattern that is relatively narrow in the vertical domain. This concept can be illustrated as follows:

The mapping between radiating elements and radio branches (antennas) in AIR 3268.
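A quick array-factor calculation (my own sketch, assuming uniform excitation and the 0.7λ vertical spacing mentioned above) confirms that a four-element vertical subarray produces a fairly narrow vertical beam:

```python
import numpy as np

d = 0.7    # vertical element spacing in wavelengths (figure from the article)
n = 4      # radiating elements per vertical subarray

theta_deg = np.linspace(-90, 90, 1801)  # angle from broadside, degrees
psi = 2 * np.pi * d * np.sin(np.radians(theta_deg))

# Array factor of n equally spaced, equally fed elements, normalized to 1
af = np.abs(np.exp(1j * np.outer(np.arange(n), psi)).sum(axis=0)) / n

# Half-power beamwidth: extent of the mainlobe where |AF| >= 1/sqrt(2)
mainlobe = theta_deg[(af >= 1 / np.sqrt(2)) & (np.abs(theta_deg) < 20)]
hpbw_deg = mainlobe.max() - mainlobe.min()
print(f"Subarray half-power beamwidth ≈ {hpbw_deg:.1f} degrees")
```

The result is around 18 degrees; redoing the calculation with 0.5λ spacing gives roughly 25 degrees, which illustrates why the larger vertical spacing focuses the energy more sharply toward the ground.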

This lightweight product is well suited for Swedish cities, which are characterized by low-rise buildings and operators that each have around 100 MHz of spectrum in the 3.5 GHz band.

If we take the AIR 3268 as a starting point, the coverage range can be improved by increasing the number of radiating elements to 192 and the maximum output power to 320 W; the AIR 3236 in the coverage category has that particular configuration. To further increase the capacity, the number of radio branches can also be increased to 64, as in the AIR 6419 that I wrote about earlier this year. These changes increase the weight from 12 kg to 20 kg.

Why low weight matters

There are multiple reasons why the weight of a Massive MIMO array matters in practice. Firstly, a low weight eases deployment since a single engineer can carry the unit; in fact, there is a 25 kg per-person limit in the industry, which implies that a single engineer may carry one AIR 3268 in each hand (as shown in the press photo from Ericsson). Secondly, the site rent in towers depends on the weight, as well as on the wind load, which is naturally reduced when the array shrinks in size. All current Ericsson products have front dimensions determined by the antenna array size, since all other components are placed behind the radiating elements. This was not the case a few years ago and demonstrates the product evolution. The thickness of the panel is determined by the radio components as well as the heatsink, which is designed to manage ambient temperatures up to 55°C.

The total energy consumption is reduced by 10% in the new product compared to its predecessor, as a result of fine-tuning all the analog components. According to Måns Hagström, Senior Radio Systems Architect at Ericsson, there are no “low-hanging fruits” left in the hardware design since the Massive MIMO product line is now mature. However, there is a new software feature called Deep Sleep, in which power amplifiers and analog components are turned off in low-traffic situations to save power. Turning off components is not as simple as it sounds, since it must be possible to turn them on again within a matter of milliseconds so that no coverage and delay issues are created.

Måns Hagström shows the current Massive MIMO portfolio at Ericsson.

Beamforming implementation

In 5G, the channel state information needed for beamforming can be acquired either through codebook-based feedback or by utilizing uplink–downlink reciprocity, where the latter is what most of the academic literature focuses on. The beamforming computation in Ericsson’s products is divided between the Massive MIMO panel and the baseband processing unit, which are interconnected using the eCPRI interface. The purpose of this computational split is to reduce the fronthaul signaling by exploiting the fact that Massive MIMO transmits a relatively small number of data streams/layers (e.g., 1-16) using a substantially larger number of radios (e.g., 32 or 64). More precisely, the Ericsson Silicon in the panel handles the mapping from data streams to radio branches, so that the eCPRI capacity requirement is independent of the number of radio branches. It is actually the same silicon that is used in all the current Massive MIMO products. I was told that some kind of regularized zero-forcing processing is utilized when computing the multi-user beamforming. Billy Hogan, Principal Engineer at Ericsson, pointed out that the beamforming implementation is flexible in the sense that there are tunable parameters that can be revised through a software upgrade, as the company learns more about how Massive MIMO works in practical deployments.
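The computational split can be sketched as follows. Assuming a generic regularized zero-forcing precoder (the article only says “some kind of regularized zero-forcing”; the sizes and regularization value below are my own toy choices), the panel applies an M x K matrix that maps K layers to M radio branches, so only the K layer samples cross the eCPRI interface per symbol:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 32, 4     # radio branches and data layers (toy sizes)
reg = 0.4        # regularization term; K/SNR is a common heuristic choice

# Downlink channel estimate: row k is user k's channel to the M radios
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Regularized zero-forcing: W = H^H (H H^H + reg*I)^{-1}, with each
# column normalized to unit transmit power per layer
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(K))
W /= np.linalg.norm(W, axis=0, keepdims=True)

# Per symbol, only the K-dimensional layer vector s crosses the fronthaul;
# the panel computes the M-dimensional radio signal x = W s locally
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
x = W @ s
print(x.shape)  # (32,)
```

Since W is updated only at the channel coherence rate while s changes every symbol, the fronthaul load scales with the number of layers rather than the number of radio branches.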

Hagström also pointed out that a key novelty in 5G is the larger variation in capabilities between handsets: for example, how many antennas they have, how flexible those antennas are, and how they make measurements and decide on the preferred mode of operation to report back to the base station. The 5G standard specifies protocols but leaves the door open for both clever and less sophisticated implementations. While Massive MIMO has been shown to provide impressive spectral efficiency in field trials, it remains to be seen how large the spectral efficiency gains become in practice, with commercial handsets and real data traffic. It will likely take several years before the data traffic reaches a point where the capability of spatially multiplexing many users is needed most of the time. In the meantime, these Massive MIMO panels will deliver higher single-user data rates throughout the coverage area than previous base stations, thanks to the stronger and more flexible beamforming.

Bo Göransson, Billy Hogan, and Måns Hagström, Massive MIMO experts at Ericsson

Future development

One of the main take-aways from my visit to the Ericsson Imagine Studio in Stockholm is that Massive MIMO product development has come much further than I anticipated. Five years ago, when I wrote the book Massive MIMO Networks, I had the vision that we should eventually be able to squeeze all the components into a compact box with a size that matches the antenna array dimensions. But I couldn’t imagine that it would happen already in 2021, when 5G deployments are still in their infancy. With this in mind, it is challenging to speculate on what will come next. If the industry can already build 64 antenna-integrated radios into a box that weighs less than 20 kg, then it can certainly build even larger arrays once there is demand for them.

The only hint about the future that I picked up from my visit is that Ericsson already considers Massive MIMO technology and its evolutions to be natural parts of 6G solutions.

IEEE Globecom Workshop on Wireless Communications for Distributed Intelligence

The 2021 IEEE GLOBECOM workshop on “Wireless Communications for Distributed Intelligence” will be held in Madrid, Spain, in December 2021. This workshop aims at investigating and re-defining the roles of wireless communications for decentralized Artificial Intelligence (AI) systems, including distributed sensing, information processing, automatic control, learning and inference.

We invite submissions of original works on the related topics, which include but are not limited to the following:

  • Network architecture and protocol design for AI-enabled 6G
  • Federated learning (FL) in wireless networks
  • Multi-agent reinforcement learning in wireless networks
  • Communication efficiency in distributed machine learning (ML)
  • Energy efficiency in distributed ML
  • Cross-layer (PHY, MAC, network layer) design for distributed ML
  • Wireless resource allocation for distributed ML
  • Signal processing for distributed ML
  • Over-the-air (OTA) computation for FL
  • Emerging PHY technologies for OTA FL
  • Privacy and security issues of distributed ML
  • Adversary-resilient distributed sensing, learning, and inference
  • Fault tolerance in distributed stochastic gradient descent (DSGD) systems
  • Fault tolerance in multi-agent systems
  • Fundamental limits of distributed ML with imperfect communication

Massive MIMO Becomes Less Massive and More Open

The name “Massive MIMO” has been debated since its inception. Tom Marzetta introduced it ten years ago as one of several potential names for his envisioned MIMO technology with a very large number of antennas. Different researchers used different terminologies in their papers during the first years of research on the topic, but the community eventually converged to calling it Massive MIMO.

The apparent issue with that terminology is that the adjective “massive” can have different meanings. The first definition in the Merriam-Webster dictionary is “consisting of a large mass”, in the sense of being “bulky” and “heavy”. The second definition is “large in scope or degree”, in the sense of being “large in comparison to what is typical”.

It is probably the second definition that Marzetta had in mind when introducing the name “Massive MIMO”; that is, a MIMO technology with a number of antennas that is large in comparison to what was typically considered in the 4G era. Yet, there has been a perception in the industry that one cannot build a base station with many antennas without it also being bulky and heavy (i.e., the first definition).

Massive MIMO products are not heavy anymore

Ericsson and Huawei have recently proved that this perception is wrong. The Ericsson AIR 6419 that was announced in February (to be released later this year) contains 64 antenna-integrated radios in a box that is roughly 1 x 0.5 m, with a weight of only 20 kg. This can be compared with Ericsson’s first Massive MIMO product from 2018, which weighed 60 kg. The product is designed for the 3.5 GHz band, supports 200 MHz of bandwidth, and delivers 320 W of output power. The box contains an application-specific integrated circuit (ASIC) that handles parts of the baseband processing. Huawei introduced a similar product in February that weighs 19 kg and supports 400 MHz of spectrum, but there are fewer details available regarding it.

The new Ericsson AIR 6419 only weighs 20 kg, thus it can be deployed by a single person. (Photo from Ericsson.)

These products seem very much in line with what Massive MIMO researchers like me have been imagining when writing scientific papers. It is impressive to see how quickly this vision has turned into reality, and how 5G has become synonymous with Massive MIMO deployments in sub-6 GHz bands, despite all the fuss about small cells with mmWave spectrum. While both technologies can be used to support higher traffic loads, it is clear that spatial multiplexing has now become the primary solution adopted by network operators in the 5G era.

Open RAN enabled Massive MIMO

While the new Ericsson and Huawei products demonstrate how a tight integration of antennas, radios, and baseband processing enables compact, low-weight Massive MIMO implementations, there is also an opposite trend. Mavenir and Xilinx have teamed up to develop a Massive MIMO solution based on the Open RAN principle of decoupling hardware and software (so that an operator can buy them from different vendors). They claim that their first 64-antenna product, which combines Xilinx’s radio hardware with Mavenir’s cloud-computing platform, will be available by the end of this year. The drawback of this hardware-software decoupling is higher energy consumption, caused by increased fronthaul signaling (when all processing is done “in the cloud”) and by the use of field-programmable gate arrays (FPGAs) instead of ASICs (since a higher level of flexibility is needed in the processing units when these are not co-designed with the radios).

Since the 5G technology is still in its infancy, it will be exciting to see how it evolves over the coming years. I believe we will see even larger antenna numbers in the 3.5 GHz band, new array form factors, products that integrate many frequency bands in the same box, digital beamforming in mmWave bands, and new types of distributed antenna deployments. The impact of Massive MIMO will be massive, even if the weight isn’t massive.