Our podcast is back with a second season! The first episode is number 19 and has the following abstract:
How far is the capacity of wireless networks from the limits imposed by nature? To seek an answer to this question, Erik G. Larsson and Emil Björnson invited Thomas Marzetta, Distinguished Industry Professor and originator of Massive MIMO, to this first episode of the second season. The conversation covers the history of that technology and the fundamental aspects that will always dictate the capacity of wireless networks: antenna technology, channel state information, spectral efficiency, bandwidth, spectrum bands, and link budgets. To learn more, you can read the article “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
I was recently invited to the Ericsson Imagine Studio to have a look at the company’s wide range of Massive MIMO products. The latest addition is the AIR 3268 with 32 antenna-integrated radios that only weighs 12 kg. In this article, I will share the new insights that I gained from this visit.
Ericsson currently has around 10 mid-band Massive MIMO products, which are divided into three categories: capacity, coverage, and compact. The products provide different combinations of:
Number of radio branches, which determines the beamforming variability;
Maximum bandwidth, which should be matched to the operator’s spectrum assets.
The new lightweight AIR 3268 (which I got the chance to carry myself) belongs to the compact category, since it “only” radiates 200 W over 200 MHz and “only” contains 128 radiating elements, which are connected to 32 radio branches (sometimes referred to as transceiver chains). A radio branch consists of filters, converters, and amplifiers. The radiating elements are organized in an 8 × 8 array, with dual-polarized elements at each location. Bo Göransson, a Senior Expert at Ericsson, told me that the element spacing is roughly 0.5λ in the horizontal dimension and 0.7λ in the vertical dimension. The exact spacing is fine-tuned based on thermal and mechanical aspects. It also varies in the sense that the physical spacing is constant, but corresponds to a different fraction of the wavelength λ depending on the frequency band used.
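As a sanity check on these numbers, the fractional spacings can be converted into physical distances. A minimal sketch in Python, assuming a 3.5 GHz carrier (an illustrative mid-band frequency; the exact product spacing is not public):

```python
# Convert element spacings given in wavelengths to physical distances,
# assuming a 3.5 GHz carrier (illustrative; not the exact product values).
c = 299_792_458  # speed of light [m/s]
f = 3.5e9        # carrier frequency [Hz]

wavelength = c / f                      # roughly 8.6 cm at 3.5 GHz
horizontal_spacing = 0.5 * wavelength   # roughly 4.3 cm
vertical_spacing = 0.7 * wavelength     # roughly 6.0 cm

print(f"wavelength = {wavelength * 100:.1f} cm")
print(f"horizontal = {horizontal_spacing * 100:.1f} cm, "
      f"vertical = {vertical_spacing * 100:.1f} cm")
```

Since the physical spacing is fixed, the same panel corresponds to, e.g., a smaller fraction of λ when operated at a lower carrier frequency.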
The reason for having a larger spacing in the vertical dimension is to obtain sharper vertical directivity, so that the radiated energy is more focused down towards the ground. This also explains why the box is rectangular, even though the elements are organized as an 8 × 8 array. Four vertically neighboring elements with equal polarization are connected to the same radio branch; Ericsson calls this a subarray. Each subarray behaves as an antenna with a fixed radiation pattern that is relatively narrow in the vertical domain. This concept can be illustrated as follows:
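To see why four vertically stacked elements give a narrow vertical beam, one can evaluate the classical uniform-linear-array factor. A minimal sketch, assuming an ideal 4-element subarray with 0.7λ spacing and omitting the individual element patterns and any real product details:

```python
import math

def subarray_gain_db(theta_deg, n=4, spacing=0.7):
    """Normalized array factor (in dB) of n vertically stacked elements,
    spaced `spacing` wavelengths apart and steered to broadside.
    theta_deg is the angle away from broadside in the vertical plane."""
    theta = math.radians(theta_deg)
    psi = 2 * math.pi * spacing * math.sin(theta)  # inter-element phase shift
    if abs(psi) < 1e-12:
        af = n  # all elements add coherently at broadside
    else:
        af = abs(math.sin(n * psi / 2) / math.sin(psi / 2))
    return 20 * math.log10(af / n)  # normalized to 0 dB at broadside

# The gain drops quickly away from broadside, i.e., a narrow vertical beam:
for angle in (0, 10, 20, 30):
    print(f"{angle:2d} degrees: {subarray_gain_db(angle):6.1f} dB")
```

With a single element (n=1) the same formula gives 0 dB at every angle, which is why connecting four elements to one radio branch sharpens the vertical directivity without using up any beamforming degrees of freedom.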
This lightweight product is well suited for Swedish cities, which are characterized by low-rise buildings and operators that each have around 100 MHz of spectrum in the 3.5 GHz band.
If we take the AIR 3268 as a starting point, the coverage range can be improved by increasing the number of radiating elements to 192 and increasing the maximum output power to 320 W. The AIR 3236 in the coverage category has that particular configuration. To further increase the capacity, the number of radio branches can also be increased to 64, as in the AIR 6419 that I wrote about earlier this year. These changes will increase the weight from 12 kg to 20 kg.
Why low weight matters
There are multiple reasons why the weight of a Massive MIMO array matters in practice. Firstly, it eases deployment since a single engineer can carry it; in fact, there is a 25 kg per-person limit in the industry, which implies that a single engineer may carry one AIR 3268 in each hand (as shown in the press photo from Ericsson). Secondly, the site rent in towers depends on the weight, as well as on the wind load, which is naturally reduced as the array shrinks in size. All current Ericsson products have front dimensions determined by the antenna array size, since all other components are placed behind the radiating elements. This was not the case a few years ago, which demonstrates the product evolution. The thickness of the panel is determined by the radio components as well as the heatsink, which is designed to manage ambient temperatures of up to 55°C.
The total energy consumption is reduced by 10% in the new product compared to its predecessor, as a result of fine-tuning all the analog components. According to Måns Hagström, Senior Radio Systems Architect at Ericsson, there are no “low-hanging fruits” left in the hardware design, since the Massive MIMO product line is now mature. However, there is a new software feature called Deep Sleep, in which power amplifiers and analog components are turned off in low-traffic situations to save power. Turning off components is not as simple as it sounds, since it must be possible to turn them on again within a millisecond so that no coverage or latency issues are created.
Beamforming implementation
The channel state information needed for beamforming can be acquired either through codebook-based feedback or by utilizing uplink-downlink reciprocity in 5G, where the latter is what most of the academic literature focuses on. The beamforming computation in Ericsson’s products is divided between the Massive MIMO panel and the baseband processing unit, which are interconnected using the eCPRI interface. The purpose of this computational split is to reduce the fronthaul signaling by exploiting the fact that Massive MIMO transmits a relatively small number of data streams/layers (e.g., 1-16) using a substantially larger number of radios (e.g., 32 or 64). More precisely, the Ericsson Silicon in the panel takes care of the mapping from data streams to radio branches, so that the capacity requirement on the eCPRI interface is independent of the number of radio branches. It is actually the same silicon that is used in all the current Massive MIMO products. I was told that some kind of regularized zero-forcing processing is utilized when computing the multi-user beamforming. Billy Hogan, Principal Engineer at Ericsson, pointed out that the beamforming implementation is flexible in the sense that there are tunable parameters that can be revised through a software upgrade, as the company learns more about how Massive MIMO works in practical deployments.
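Since all I was told is that “some kind of regularized zero-forcing” is used, here is a generic textbook sketch of RZF precoding rather than the actual Ericsson implementation; the array size, number of layers, and regularization value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 32, 4  # radio branches and simultaneous user layers (assumed values)

# Random i.i.d. Rayleigh-fading channel matrix (K users x M radios)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Regularized zero-forcing: invert the channel, but regularize the inverse
# so that noise is not amplified at low SNR (reg -> 0 gives pure ZF).
reg = 1.0
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(K))
W = W / np.linalg.norm(W, axis=0)  # unit-norm precoding vector per layer

# The effective channel H @ W is close to diagonal: each user receives its
# own stream with little interference from the other layers.
effective = H @ W
```

In a real product, the regularization and power allocation would presumably be among the tunable parameters mentioned above.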
Hagström also pointed out that a key novelty in 5G is the larger variation in capabilities between handsets: for example, in how many antennas they have, how flexible those antennas are, and how they make measurements and decisions on the preferred mode of operation to report back to the base station. The 5G standard specifies protocols but leaves the door open for both clever and less sophisticated implementations. While Massive MIMO has been shown to provide impressive spectral efficiency in field trials, it remains to be seen how large the spectral efficiency gains become in practice, with commercial handsets and real data traffic. It will likely take several years before the data traffic reaches a point where the capability of spatially multiplexing many users is needed most of the time. In the meantime, these Massive MIMO panels will deliver higher single-user data rates throughout the coverage area than previous base stations, thanks to the stronger and more flexible beamforming.
Future development
One of the main takeaways from my visit to the Ericsson Imagine Studio in Stockholm is that Massive MIMO product development has come much further than I anticipated. Five years ago, when I wrote the book Massive MIMO Networks, I had the vision that we should eventually be able to squeeze all the components into a compact box with a size that matches the antenna array dimensions. But I couldn’t imagine that it would happen already in 2021, when the 5G deployments are still in their infancy. With this in mind, it is challenging to speculate on what will come next. If the industry can already build 64 antenna-integrated radios into a box that weighs less than 20 kg, then one can certainly build even larger arrays once there is demand for them.
The only hint about the future that I picked up from my visit is that Ericsson already considers Massive MIMO technology and its evolutions to be natural parts of 6G solutions.
We have now released the 18th episode of the podcast Wireless Future, which is the last one in the first season (we are taking a summer break). The episode has the following abstract:
Many individuals are speculating about 6G, but in this episode, you will hear the joint vision of 700+ researchers at Ericsson. Erik G. Larsson and Emil Björnson are visited by Magnus Frodigh, Vice-President and Head of Ericsson Research. His team has recently published the white paper “Ever-present intelligent communication: A research outlook towards 6G”. The conversation covers emerging applications, new requirements, and research challenges that might define the 6G era. How can we achieve limitless connectivity? Which frequency bands will become important? What is a network compute fabric? What should students learn to take part in the 6G development? These are just some of the questions that are answered.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
Every few months, there is a new press release about how a mobile network operator has collaborated with a network vendor to set a new 5G data speed record. There is no doubt that carrier aggregation between the mid-band and mmWave band can deliver more than 5 Gbps. However, it is less clear what we would actually need such high speeds for. The majority of the data traffic in current networks is consumed by video streaming. Even if you stream a video in 4K resolution, the codec doesn’t need more than 25 Mbps! Hence, 5G allows you to download an entire motion picture in a matter of seconds, but that goes against the main principle of video streaming, namely that the video is downloaded at the same pace as it is watched, to alleviate the need for intermediate storage (apart from buffering). So what is the point of these high speeds? That is what I will explain in this blog post.
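The arithmetic behind that claim is straightforward; a minimal sketch, assuming a two-hour movie streamed at the 25 Mbps codec rate:

```python
# Bits in a two-hour movie encoded at 25 Mbps (the movie length is an
# assumption for illustration).
movie_bits = 25e6 * 2 * 3600          # 1.8e11 bits, i.e., 22.5 GB
seconds_at_5gbps = movie_bits / 5e9   # download time over a 5 Gbps link
print(seconds_at_5gbps)               # prints 36.0
```

So a full movie arrives in about half a minute at 5 Gbps, 200 times faster than the streaming client actually consumes it.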
The mobile data traffic is growing by 25-50% per year, but the reason is not that we require higher data rates when using our devices. Instead, the main reason is that we are using our devices more often, so cellular networks must evolve to manage the increasing accumulated data rate demand of the active devices.
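A compound-growth rule of thumb makes this growth rate concrete: at 25-50% per year, the traffic doubles every couple of years.

```python
import math

# Doubling time T solves (1 + growth)^T = 2, i.e., T = log(2) / log(1 + growth)
doubling_years = {g: math.log(2) / math.log(1 + g) for g in (0.25, 0.50)}
for growth, years in doubling_years.items():
    print(f"{growth:.0%} annual growth: traffic doubles in {years:.1f} years")
```

That is roughly a doubling every 1.7 to 3.1 years, which is why the multiplexing capability of the network must keep growing as well.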
In other words, our networks must be capable of multiplexing all the devices that want to be active simultaneously in peak hours. As the traffic grows, more devices can be multiplexed per km² by either deploying more base stations that each can serve a certain number of devices, using more spectrum that can be divided between the devices, or using Massive MIMO technology for spatial multiplexing by beamforming.
The preferred multiplexing solution depends on the deployment cost and various local practicalities (e.g., the shape of the propagation environment and user distribution). For example, the main purpose of the new mmWave spectrum is not to continuously deliver 5 Gbps to a single user, but to share that traffic capacity between the many users in hotspots. If each user requires 25 Mbps, then 200 users can share a 5 Gbps capacity. So far, there are few deployments of that kind since Massive MIMO in the 3.5 GHz band has been deployed in the first 5G networks to deliver multi-gigabit accumulated data rates.
I believe that spatial multiplexing will continue to be the preferred solution in future network generations, while mmWave spectrum will mainly be utilized as a WiFi replacement in hotspots with many users and high service requirements. I am skeptical of the claims that future networks must operate at higher carrier frequencies (e.g., THz bands); we don’t need more spectrum, we need better multiplexing capabilities, and those can be achieved in other ways than taking a wide bandwidth and sharing it between the users. In the following video, I elaborate more on these points:
We have now released the 17th episode of the podcast Wireless Future, with the following abstract:
The wireless data traffic grows by 50% per year which implies that the energy consumption in the network equipment is also growing steadily. This raises both environmental and economic concerns. In this episode, Erik G. Larsson and Emil Björnson discuss how the wireless infrastructure can be made more energy-efficient. The conversation covers the basic data traffic characteristics and definition of energy efficiency, as well as what can be done when designing future network infrastructure, planning deployments, and developing efficient algorithms. To learn more, they recommend the IEEE 5G and Beyond Technology Roadmap article “Energy Efficiency” and also “Deploying Dense Networks for Maximal Energy Efficiency: Small Cells Meet Massive MIMO”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
The 2021 IEEE GLOBECOM workshop on “Wireless Communications for Distributed Intelligence” will be held in Madrid, Spain, in December 2021. This workshop aims at investigating and re-defining the roles of wireless communications for decentralized Artificial Intelligence (AI) systems, including distributed sensing, information processing, automatic control, learning and inference.
We invite submissions of original works on the related topics, which include but are not limited to the following:
Network architecture and protocol design for AI-enabled 6G
Federated learning (FL) in wireless networks
Multi-agent reinforcement learning in wireless networks
Communication efficiency in distributed machine learning (ML)
Energy efficiency in distributed ML
Cross-layer (PHY, MAC, network layer) design for distributed ML
Wireless resource allocation for distributed ML
Signal processing for distributed ML
Over-the-air (OTA) computation for FL
Emerging PHY technologies for OTA FL
Privacy and security issues of distributed ML
Adversary-resilient distributed sensing, learning, and inference
Fault tolerance in distributed stochastic gradient descent (DSGD) systems
Fault tolerance in multi-agent systems
Fundamental limits of distributed ML with imperfect communication
We have now released the 16th episode of the podcast Wireless Future, with the following abstract:
The research community’s hype around 5G has quickly shifted to hyping the next big thing: 6G. This raises many questions: Did 5G become as revolutionary as previously claimed? Which physical-layer aspects remain to be improved in 6G? To discuss these things, Erik G. Larsson and Emil Björnson are visited by Professor Angel Lozano, author of the seminal papers “What will 5G be?” and “Is the PHY layer dead?”. The conversation covers the practical and physical limits in communications, the role of machine learning, the relation between academia and industry, and whether we have got lost in asymptotic analysis. Please visit Angel’s website.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places: