
Do We Need More Bandwidth?

The bit rate (bit/s) is so tightly connected with the bandwidth (Hz) that the computer science community uses these words interchangeably. This makes good sense when considering fixed communication channels (e.g., cables), for which these quantities are proportional, with the spectral efficiency (bit/s/Hz) being the proportionality constant. However, when dealing with time-varying wireless channels, the spectral efficiency can vary by orders of magnitude depending on the propagation conditions (e.g., between cell center and cell edge), which weakens the connection between rate and bandwidth.
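To make this concrete, here is a minimal sketch of how a fixed bandwidth translates into very different bit rates depending on the spectral efficiency (the efficiency values are illustrative assumptions, not measurements):

```python
# Bit rate = bandwidth * spectral efficiency. The bandwidth is fixed,
# while the spectral efficiencies below are illustrative assumptions
# for good and poor propagation conditions.
bandwidth_hz = 100e6  # 100 MHz

spectral_efficiency = {
    "cell center": 7.0,  # bit/s/Hz under strong channel conditions
    "cell edge": 0.5,    # bit/s/Hz under weak channel conditions
}

for location, se in spectral_efficiency.items():
    rate_mbps = bandwidth_hz * se / 1e6
    print(f"{location}: {rate_mbps:.0f} Mbit/s")
# cell center: 700 Mbit/s, cell edge: 50 Mbit/s
```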

The peak rate of a 4G device can reach 1 Gbit/s, and 5G devices are expected to reach 20 Gbit/s. These numbers greatly surpass the needs of both the most demanding contemporary use cases, such as the 25 Mbit/s required by 4K video streaming, and envisioned virtual reality applications that might require a few hundred Mbit/s. One can certainly imagine other futuristic applications that are more demanding, but since there is a limit to how much information the human perception system can process in real time, these are typically “data shower” situations where a huge dataset must be transferred quickly to/from a device for later utilization or processing. I think it is fair to say that future networks cannot be built primarily for such niche applications, so I made the following one-minute video claiming that wireless doesn’t need more bandwidth but higher efficiency, so that we can deliver bit rates close to the current peak rates most of the time instead of only under ideal circumstances.

Why are people talking about THz communications?

The spectral bandwidth has increased with every wireless generation, so naturally the same thing will happen in 6G. This is the kind of argument that you might hear from proponents of (sub-)THz communications, which is synonymous with operating at carrier frequencies beyond 100 GHz, where huge bandwidths are available for utilization. The main weakness of this argument is that increasing the bandwidth has never been the main goal of wireless development but only a convenient way to increase the data rate.

As the wireless data traffic continues to increase, the main contributing factor will not be that our devices require much higher instantaneous rates when they are active, but that more devices are active more often. Hence, I believe the most important performance metric is the maximum traffic capacity measured in bit/s/km², which describes the accumulated traffic that the active devices can generate in a given area.

The traffic capacity is determined by three main factors, whose product is illustrated in the sketch after the list:

  1. The number of spatially multiplexed devices;
  2. The bandwidth efficiency per device; and
  3. The bandwidth.
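As a rough sketch, the traffic capacity is simply the product of these three factors (all numbers below are illustrative assumptions):

```python
# Traffic capacity [bit/s/km^2] = (spatially multiplexed devices per km^2)
#   * (bandwidth efficiency per device [bit/s/Hz]) * (bandwidth [Hz]).
# All values are illustrative assumptions.
devices_per_km2 = 1000   # simultaneously active, spatially multiplexed devices
se_per_device = 2.0      # bit/s/Hz per device
bandwidth_hz = 100e6     # 100 MHz

capacity = devices_per_km2 * se_per_device * bandwidth_hz
print(f"Traffic capacity: {capacity / 1e9:.0f} Gbit/s/km^2")  # 200 Gbit/s/km^2

# Doubling any one of the three factors doubles the capacity, so more
# bandwidth is just one of three possible knobs to turn.
```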

We can certainly improve this metric by using more bandwidth, but it is not the only way, and it mainly helps users that have good channel conditions. The question that researchers need to ask is: What is the preferred way to increase the traffic capacity from a technical, economic, and practical perspective?

I don’t think we have a conclusive answer to this yet, but it is important to remember that even if the laws of nature stay constant, the preferred solution can change with time. A concrete example is the development of processors, for which the main computing performance metric is the number of floating-point operations per second (FLOPS). Improving this metric used to be synonymous with increasing the clock speed, but that trend has been replaced by increasing the number of cores and using parallel threads, because this leads to more power- and heat-efficient solutions than pushing the clock speed beyond its current range.

The corresponding development in wireless communications would be to stop increasing the bandwidth (which determines the sampling rate of the signals and the clock speed needed for processing) and instead focus on multiplexing many data streams, which take the role of the threads in this analogy, and on balancing the bandwidth efficiency between the streams. The following video describes my thoughts on how to develop wireless technology in that direction:

As a final note, as the traffic capacity in wireless networks increases, there will be some point-to-point links that require huge capacity. This is particularly the case between an access point and the core network. These links will eventually need cables or wireless technologies that can handle many Tbit/s, and the wireless option will then call for THz communications. The points that I make above apply to the wireless links at the edge, between devices and access points, not to the backhaul infrastructure.

Episode 19: Future of Multi-Antenna Technology and Spectrum (With Thomas Marzetta)

Our podcast is back with a second season! The first episode of the season is number 19 and has the following abstract:

How far is the capacity of wireless networks from the limits imposed by nature? To seek an answer to this question, Erik G. Larsson and Emil Björnson invited Thomas Marzetta, Distinguished Industry Professor and originator of Massive MIMO, to this first episode of the second season. The conversation covers the history of that technology and the fundamental aspects that will always dictate the capacity of wireless networks: antenna technology, channel state information, spectral efficiency, bandwidth, spectrum bands, and link budgets. To learn more, you can read the article “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

A Closer Look at Massive MIMO From Ericsson

I was recently invited to the Ericsson Imagine Studio to have a look at the company’s wide range of Massive MIMO products. The latest addition is the AIR 3268 with 32 antenna-integrated radios that only weighs 12 kg. In this article, I will share the new insights that I gained from this visit.

The new AIR 3268 has an impressively low weight.

Ericsson currently has around 10 mid-band Massive MIMO products, which are divided into three categories: capacity, coverage, and compact. The products provide different combinations of:

  • Maximum output power and number of radiating elements, which jointly determine the effective isotropic radiated power (see the sketch after the list);
  • Number of radio branches, which determines the beamforming variability;
  • Maximum bandwidth, which should be matched to the operator’s spectrum assets.
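As a minimal sketch of the first point, the output power and the number of radiating elements combine into the EIRP roughly as follows (the per-element gain is my own illustrative assumption):

```python
import math

# EIRP of an array that transmits its total power coherently over
# N radiating elements: EIRP = P_total * N * G_element.
# The per-element gain is an illustrative assumption.
p_total_w = 200    # maximum output power in watts (as for AIR 3268)
n_elements = 128   # number of radiating elements (as for AIR 3268)
g_element = 4      # assumed per-element gain (about 6 dBi)

eirp_w = p_total_w * n_elements * g_element
eirp_dbm = 10 * math.log10(eirp_w * 1000)
print(f"EIRP: {eirp_dbm:.1f} dBm")  # about 80 dBm in this example
```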

The new lightweight AIR 3268 (that I got the chance to carry myself) belongs to the compact category, since it “only” radiates 200 W over 200 MHz and “only” contains 128 radiating elements, which are connected to 32 radio branches (sometimes referred to as transceiver chains). A radio branch consists of filters, converters, and amplifiers. The radiating elements are organized in an 8 × 8 array, with dual-polarized elements at each location. Bo Göransson, a Senior Expert at Ericsson, told me that the element spacing is roughly 0.5λ in the horizontal dimension and 0.7λ in the vertical dimension. The exact spacing is fine-tuned based on thermal and mechanical aspects. It also varies in the sense that the physical spacing is constant but corresponds to a different fraction of the wavelength λ depending on the frequency band used.
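A minimal sketch of that last point, assuming for illustration that the physical spacing was chosen to be 0.5λ at 3.5 GHz:

```python
# The physical element spacing is fixed, but its size measured in
# wavelengths (lambda = c/f) changes with the carrier frequency.
# The spacing below is an assumed value: 0.5 lambda at 3.5 GHz.
c = 3e8                       # speed of light in m/s
spacing_m = 0.5 * c / 3.5e9   # about 4.3 cm

for f_hz in (3.3e9, 3.5e9, 3.7e9):
    wavelength_m = c / f_hz
    print(f"{f_hz / 1e9:.1f} GHz: spacing = {spacing_m / wavelength_m:.2f} lambda")
# 3.3 GHz: 0.47 lambda, 3.5 GHz: 0.50 lambda, 3.7 GHz: 0.53 lambda
```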

The reason for having a larger spacing in the vertical dimension is to obtain sharper vertical directivity, so that the radiated energy is more focused down towards the ground. This also explains why the box is rectangular, even if the elements are organized as an 8 × 8 array. Four vertically neighboring elements with equal polarization are connected to the same radio branch, which Ericsson calls a subarray. Each subarray behaves as an antenna with a fixed radiation pattern that is relatively narrow in the vertical domain. This concept can be illustrated as follows:

The mapping between radiating elements and radio branches (antennas) in AIR 3268.
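To make the mapping concrete, here is a minimal sketch of how the 128 elements can map to the 32 radio branches according to the subarray rule described above (the exact indexing is my own assumption for illustration):

```python
# 8 x 8 element positions with two polarizations each = 128 elements.
# Four vertically neighboring elements of equal polarization form a
# subarray that feeds one radio branch:
# (8 rows / 4) groups * 8 columns * 2 polarizations = 32 branches.
# The indexing scheme below is an assumption for illustration.
branches = {}
for pol in ("H", "V"):
    for col in range(8):
        for group in range(2):  # two vertical groups of four elements
            rows = range(4 * group, 4 * group + 4)
            branches[(pol, col, group)] = [(row, col, pol) for row in rows]

print(len(branches))                           # 32 radio branches
print(sum(len(e) for e in branches.values()))  # 128 radiating elements
```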

This lightweight product is well suited for Swedish cities, which are characterized by low-rise buildings and operators that each have around 100 MHz of spectrum in the 3.5 GHz band.

If we take the AIR 3268 as a starting point, the coverage range can be improved by increasing the number of radiating elements to 192 and the maximum output power to 320 W. The AIR 3236 in the coverage category has that particular configuration. To further increase the capacity, the number of radio branches can also be increased to 64, as in the AIR 6419 that I wrote about earlier this year. These changes increase the weight from 12 kg to 20 kg.

Why low weight matters

There are multiple reasons why the weight of a Massive MIMO array matters in practice. Firstly, it eases deployment since a single engineer can carry it; in fact, there is a 25 kg per-person limit in the industry, which implies that a single engineer may carry one AIR 3268 in each hand (as shown in the press photo from Ericsson). Secondly, the site rent in towers depends on the weight, as well as on the wind load, which is naturally reduced when the array shrinks in size. All current Ericsson products have front dimensions determined by the antenna array size, since all other components are placed behind the radiating elements. This was not the case a few years ago, which demonstrates the product evolution. The thickness of the panel is determined by the radio components as well as the heatsink, which is designed to manage ambient temperatures up to 55°C.

The total energy consumption is reduced by 10% in the new product compared to its predecessor, as a result of fine-tuning all the analog components. According to Måns Hagström, Senior Radio Systems Architect at Ericsson, there are no “low-hanging fruits” left in the hardware design since the Massive MIMO product line is now mature. However, there is a new software feature called Deep Sleep, where power amplifiers and analog components are turned off in low-traffic situations to save power. Turning off components is not as simple as it sounds, since it must be possible to turn them on again within a millisecond so that no coverage or delay issues are created.
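The principle of such a sleep mode can be sketched as follows (a minimal illustration with assumed threshold and deadline values, not Ericsson's actual implementation):

```python
# Minimal sketch of a sleep-mode decision: power amplifiers are turned
# off when the traffic load is low and must be turned back on within
# about a millisecond. The threshold and deadline are assumed values.
LOW_TRAFFIC_THRESHOLD = 0.05  # fraction of peak load
WAKE_UP_DEADLINE_S = 1e-3     # real hardware must wake up this fast

def amplifiers_should_be_on(traffic_load: float) -> bool:
    """Decide whether the power amplifiers should be powered on."""
    return traffic_load >= LOW_TRAFFIC_THRESHOLD

print(amplifiers_should_be_on(0.01))  # False -> enter Deep Sleep
print(amplifiers_should_be_on(0.50))  # True  -> wake up within deadline
```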

Måns Hagström shows the current Massive MIMO portfolio at Ericsson.

Beamforming implementation

The channel state information needed for beamforming can be acquired in 5G either by codebook-based feedback or by utilizing uplink-downlink reciprocity, where the latter is what most of the academic literature focuses on. The beamforming computation in Ericsson’s products is divided between the Massive MIMO panel and the baseband processing unit, which are interconnected using the eCPRI interface. The purpose of the computational split is to reduce the fronthaul signaling by exploiting the fact that Massive MIMO transmits a relatively small number of data streams/layers (e.g., 1-16) using a substantially larger number of radios (e.g., 32 or 64). More precisely, the Ericsson Silicon in the panel takes care of the mapping from data streams to radio branches, so that the eCPRI interface capacity requirement is independent of the number of radio branches. It is actually the same silicon that is used in all the current Massive MIMO products. I was told that some kind of regularized zero-forcing processing is utilized when computing the multi-user beamforming. Billy Hogan, Principal Engineer at Ericsson, pointed out that the beamforming implementation is flexible in the sense that there are tunable parameters that can be revised through a software upgrade, as the company learns more about how Massive MIMO works in practical deployments.
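Regularized zero-forcing is a standard multi-user precoding method, and a minimal sketch (my own illustration, not Ericsson's code) of how it maps a few data streams to many radio branches looks as follows:

```python
import numpy as np

# Regularized zero-forcing (RZF): map K data streams to M radio
# branches using the K x M channel matrix H. The channel realization
# and the regularization parameter are illustrative assumptions.
rng = np.random.default_rng(0)
M, K = 32, 8  # radio branches, data streams/layers
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

reg = 0.1  # regularization, e.g., related to the noise-to-power ratio
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(K))
W /= np.linalg.norm(W)  # normalize the total transmit power

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # stream symbols
x = W @ s  # per-branch transmit signal
print(x.shape)  # (32,): one output per radio branch
```

Note how only the K stream symbols need to cross the fronthaul; the expansion to M branch signals happens in the panel, which is why the eCPRI capacity requirement is independent of the number of radio branches.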

Hagström also pointed out that a key novelty in 5G is the larger variation in capabilities between handsets, for example, in how many antennas they have, how flexible the antennas are, and how they make measurements and decisions on the preferred mode of operation to report back to the base station. The 5G standard specifies protocols but leaves the door open for both clever and less sophisticated implementations. While Massive MIMO has been shown to provide impressive spectral efficiency in field trials, it remains to be seen how large the spectral efficiency gains become in practice, when using commercial handsets and real data traffic. It will likely take several years before the data traffic reaches a point where the capability of spatially multiplexing many users is needed most of the time. In the meantime, these Massive MIMO panels will deliver higher single-user data rates throughout the coverage area than previous base stations, thanks to stronger and more flexible beamforming.

Bo Göransson, Billy Hogan, and Måns Hagström, Massive MIMO experts at Ericsson

Future development

One of the main take-aways from my visit to the Ericsson Imagine Studio in Stockholm is that Massive MIMO product development has come much further than I anticipated. Five years ago, when I wrote the book Massive MIMO Networks, I had the vision that we should eventually be able to squeeze all the components into a compact box with a size that matches the antenna array dimensions. But I couldn’t imagine that it would happen already in 2021, when the 5G deployments are still in their infancy. With this in mind, it is challenging to speculate on what will come next. If the industry can already build 64 antenna-integrated radios into a box that weighs less than 20 kg, then one can certainly build even larger arrays once there is demand for them.

The only hint about the future that I picked up from my visit is that Ericsson already considers Massive MIMO technology and its evolutions to be natural parts of 6G solutions.

Episode 18: Ever-Present Intelligent 6G Communications (with Magnus Frodigh)

We have now released the 18th episode of the podcast Wireless Future, which is the last one in the first season (we are taking a summer break). The episode has the following abstract:

Many individuals are speculating about 6G, but in this episode, you will hear the joint vision of 700+ researchers at Ericsson. Erik G. Larsson and Emil Björnson are visited by Magnus Frodigh, Vice-President and Head of Ericsson Research. His team has recently published the white paper “Ever-present intelligent communication: A research outlook towards 6G”. The conversation covers emerging applications, new requirements, and research challenges that might define the 6G era. How can we achieve limitless connectivity? Which frequency bands will become important? What is a network compute fabric? What should students learn to take part in the 6G development? These are just some of the questions that are answered.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

It is All About Multiplexing

Every few months, there is a new press release about how a mobile network operator has collaborated with a network vendor to set a new 5G data speed record. There is no doubt that carrier aggregation between the mid-band and mmWave band can deliver more than 5 Gbps. However, it is less clear what we would actually need such high speeds for. The majority of the data traffic in current networks is consumed by video streaming, and even if you stream a 4K video, the codec doesn’t need more than 25 Mbps! Hence, 5G allows you to download an entire motion picture in a matter of seconds, but that goes against the main principle of video streaming, namely that the video is downloaded at the same pace as it is watched to alleviate the need for intermediate storage (apart from buffering). So what is the point of these high speeds? That is what I will explain in this blog post.

The mobile data traffic is growing by 25-50% per year, but the reason is not that we require higher data rates when using our devices. Instead, the main reason is that we are using our devices more frequently, so the cellular networks must evolve to manage the increasing accumulated data rate demand of the active devices.
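A quick calculation shows what such growth rates imply for how often the accumulated traffic doubles:

```python
import math

# Years until the traffic doubles at a constant annual growth rate:
# doubling_time = ln(2) / ln(1 + rate).
for rate in (0.25, 0.50):
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%} annual growth: traffic doubles every {years:.1f} years")
# 25%: every 3.1 years, 50%: every 1.7 years
```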

In other words, our networks must be capable of multiplexing all the devices that want to be active simultaneously in peak hours. As the traffic grows, more devices can be multiplexed per km2 by either deploying more base stations that each can serve a certain number of devices, using more spectrum that can be divided between the devices, or using Massive MIMO technology for spatial multiplexing by beamforming.

The preferred multiplexing solution depends on the deployment cost and various local practicalities (e.g., the shape of the propagation environment and the user distribution). For example, the main purpose of the new mmWave spectrum is not to continuously deliver 5 Gbps to a single user, but to share that traffic capacity between the many users in hotspots. If each user requires 25 Mbps, then 200 users can share a 5 Gbps capacity. So far, there are few deployments of that kind, since Massive MIMO in the 3.5 GHz band has been deployed in the first 5G networks and can already deliver multi-gigabit accumulated data rates.

I believe that spatial multiplexing will continue to be the preferred solution in future network generations, while mmWave spectrum will mainly be utilized as a WiFi replacement in hotspots with many users and high service requirements. I am skeptical towards the claims that future networks must operate at higher carrier frequencies (e.g., THz bands); we don’t need more spectrum, we need better multiplexing capabilities, and that can be achieved in other ways than taking a wide bandwidth and sharing it between the users. In the following video, I elaborate more on these points:

Episode 17: Energy-Efficient Communications

We have now released the 17th episode of the podcast Wireless Future, with the following abstract:

The wireless data traffic grows by 50% per year, which implies that the energy consumption of the network equipment is also growing steadily. This raises both environmental and economic concerns. In this episode, Erik G. Larsson and Emil Björnson discuss how the wireless infrastructure can be made more energy-efficient. The conversation covers the basic data traffic characteristics and the definition of energy efficiency, as well as what can be done when designing future network infrastructure, planning deployments, and developing efficient algorithms. To learn more, they recommend the IEEE 5G and Beyond Technology Roadmap article “Energy Efficiency” and also “Deploying Dense Networks for Maximal Energy Efficiency: Small Cells Meet Massive MIMO”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 16: 6G and the Physical Layer (with Angel Lozano)

We have now released the 16th episode of the podcast Wireless Future, with the following abstract:

The research community’s hype around 5G has quickly shifted to hyping the next big thing: 6G. This raises many questions: Did 5G become as revolutionary as previously claimed? Which physical-layer aspects remain to be improved in 6G? To discuss these things, Erik G. Larsson and Emil Björnson are visited by Professor Angel Lozano, author of the seminal papers “What will 5G be?” and “Is the PHY layer dead?”. The conversation covers the practical and physical limits in communications, the role of machine learning, the relation between academia and industry, and whether we have got lost in asymptotic analysis. Please visit Angel’s website.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places: