Category Archives: 5G

Episode 18: Ever-Present Intelligent 6G Communications (with Magnus Frodigh)

We have now released the 18th episode of the podcast Wireless Future, which is the last one in the first season (we are taking a summer break). The episode has the following abstract:

Many individuals are speculating about 6G, but in this episode, you will hear the joint vision of 700+ researchers at Ericsson. Erik G. Larsson and Emil Björnson are visited by Magnus Frodigh, Vice-President and Head of Ericsson Research. His team has recently published the white paper “Ever-present intelligent communication: A research outlook towards 6G”. The conversation covers emerging applications, new requirements, and research challenges that might define the 6G era. How can we achieve limitless connectivity? Which frequency bands will become important? What is a network compute fabric? What should students learn to take part in the 6G development? These are just some of the questions that are answered.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

It is All About Multiplexing

Every few months, there is a new press release about how a mobile network operator has collaborated with a network vendor to set a new 5G data speed record. There is no doubt that carrier aggregation between the mid-band and mmWave band can deliver more than 5 Gbps. However, it is less clear what we would actually need such high speeds for. The majority of the data traffic in current networks is consumed by video streaming. Even if you stream a video in 4K resolution, the codec doesn’t need more than 25 Mbps! Hence, 5G allows you to download an entire motion picture in a matter of seconds, but that goes against the main principle of video streaming: the video is downloaded at the same pace as it is watched, to remove the need for intermediate storage (apart from buffering). So what is the point of these high speeds? That is what I will explain in this blog post.
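As a rough sanity check of these numbers, the minimal sketch below (assuming a 2-hour movie, the 25 Mbps streaming rate, and the 5 Gbps peak rate mentioned above) computes how much data such a movie contains and how quickly a 5 Gbps link could download it.

```python
# Back-of-envelope check: downloading a 4K movie over a 5 Gbps 5G link.
# All numbers are illustrative assumptions, not codec or operator figures.

movie_duration_s = 2 * 3600       # assumed 2-hour movie, in seconds
streaming_rate_mbps = 25          # 4K streaming bitrate in Mbit/s
link_rate_mbps = 5000             # 5 Gbit/s peak 5G rate in Mbit/s

movie_size_mbit = movie_duration_s * streaming_rate_mbps
download_time_s = movie_size_mbit / link_rate_mbps

print(f"Movie size: {movie_size_mbit / 8000:.1f} GB")             # ~22.5 GB
print(f"Download time at 5 Gbps: {download_time_s:.0f} seconds")  # ~36 s
```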

Mobile data traffic is growing by 25-50% per year, but the reason is not that we require higher data rates when using our devices. The main reason is that we are using our devices more frequently, so the cellular networks must evolve to manage the increasing aggregate data rate demand of the active devices.
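To put these growth rates in perspective, the short sketch below computes how quickly the traffic doubles or grows tenfold under sustained 25% or 50% annual growth. This is plain compound-growth arithmetic, not a forecast.

```python
import math

# Compound-growth arithmetic for the 25-50% annual traffic growth above.
for annual_growth in (0.25, 0.50):
    doubling_years = math.log(2) / math.log(1 + annual_growth)
    tenfold_years = math.log(10) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%} per year: traffic doubles in {doubling_years:.1f} years "
          f"and grows tenfold in {tenfold_years:.1f} years")

# 25% per year: doubles in ~3.1 years, grows tenfold in ~10.3 years
# 50% per year: doubles in ~1.7 years, grows tenfold in ~5.7 years
```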

In other words, our networks must be capable of multiplexing all the devices that want to be active simultaneously during peak hours. As the traffic grows, more devices can be multiplexed per km² by deploying more base stations (each of which can serve a certain number of devices), by using more spectrum that can be divided between the devices, or by using Massive MIMO technology for spatial multiplexing through beamforming.

The preferred multiplexing solution depends on the deployment cost and various local practicalities (e.g., the shape of the propagation environment and the user distribution). For example, the main purpose of the new mmWave spectrum is not to continuously deliver 5 Gbps to a single user, but to share that traffic capacity between the many users in a hotspot. If each user requires 25 Mbps, then 200 users can share a 5 Gbps capacity. So far, there are few deployments of that kind, since the first 5G networks have instead relied on Massive MIMO in the 3.5 GHz band to deliver multi-gigabit aggregate data rates.
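To make the comparison concrete, here is a minimal sketch of how the aggregate capacity per km² scales with each of the three multiplexing levers, followed by the hotspot sharing example above. All the numbers (cell density, bandwidth, spectral efficiency, number of spatial streams) are assumptions chosen purely for illustration.

```python
# Area capacity scales with the product of cell density, bandwidth,
# spectral efficiency, and the number of spatial streams. The numbers
# below are illustrative assumptions, not measured network parameters.

def area_capacity_gbps_per_km2(cells_per_km2, bandwidth_mhz,
                               spectral_eff_bps_per_hz, spatial_streams):
    """Aggregate downlink capacity per km^2, in Gbit/s."""
    per_cell_bps = bandwidth_mhz * 1e6 * spectral_eff_bps_per_hz * spatial_streams
    return cells_per_km2 * per_cell_bps / 1e9

baseline = area_capacity_gbps_per_km2(5, 20, 2, 1)    # reference deployment
denser   = area_capacity_gbps_per_km2(15, 20, 2, 1)   # 3x more base stations
more_bw  = area_capacity_gbps_per_km2(5, 100, 2, 1)   # 5x more spectrum
mimo     = area_capacity_gbps_per_km2(5, 20, 2, 8)    # 8 spatial streams

print(baseline, denser, more_bw, mimo)  # 0.2, 0.6, 1.0, 1.6 Gbit/s per km^2

# Hotspot sharing: a 5 Gbps mmWave cell split between 25 Mbps video users.
print(5000 // 25, "users can stream simultaneously")  # 200 users
```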

I believe that spatial multiplexing will continue to be the preferred solution in future network generations, while mmWave spectrum will mainly be utilized as a WiFi replacement in hotspots with many users and high service requirements. I am skeptical of the claims that future networks must operate at higher carrier frequencies (e.g., THz bands); we don’t need more spectrum, we need better multiplexing capabilities, and those can be achieved in other ways than taking a wide bandwidth and sharing it between the users. In the following video, I elaborate more on these things:

Episode 17: Energy-Efficient Communications

We have now released the 17th episode of the podcast Wireless Future, with the following abstract:

The wireless data traffic grows by 50% per year, which implies that the energy consumption of the network equipment is also growing steadily. This raises both environmental and economic concerns. In this episode, Erik G. Larsson and Emil Björnson discuss how the wireless infrastructure can be made more energy-efficient. The conversation covers the basic data traffic characteristics and the definition of energy efficiency, as well as what can be done when designing future network infrastructure, planning deployments, and developing efficient algorithms. To learn more, they recommend the IEEE 5G and Beyond Technology Roadmap article “Energy Efficiency” and also “Deploying Dense Networks for Maximal Energy Efficiency: Small Cells Meet Massive MIMO”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 16: 6G and the Physical Layer (with Angel Lozano)

We have now released the 16th episode of the podcast Wireless Future, with the following abstract:

The research community’s hype around 5G has quickly shifted to hyping the next big thing: 6G. This raises many questions: Did 5G become as revolutionary as previously claimed? Which physical-layer aspects remain to be improved in 6G? To discuss these things, Erik G. Larsson and Emil Björnson are visited by Professor Angel Lozano, author of the seminal papers “What will 5G be?” and “Is the PHY layer dead?”. The conversation covers the practical and physical limits in communications, the role of machine learning, the relation between academia and industry, and whether we have got lost in asymptotic analysis. Please visit Angel’s website.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Massive MIMO Becomes Less Massive and More Open

The name “Massive MIMO” has been debated since its inception. Tom Marzetta introduced it ten years ago as one of several potential names for his envisioned MIMO technology with a very large number of antennas. Different researchers used different terminologies in their papers during the first years of research on the topic, but the community eventually converged to calling it Massive MIMO.

The apparent issue with that terminology is that the adjective “massive” can have different meanings. The first definition in the Merriam-Webster dictionary is “consisting of a large mass”, in the sense of being “bulky” and “heavy”. The second definition is “large in scope or degree”, in the sense of being “large in comparison to what is typical”.

It is probably the second definition that Marzetta had in mind when introducing the name “Massive MIMO”; that is, a MIMO technology with a number of antennas that is large in comparison to what was typically considered in the 4G era. Yet, there has been a perception in the industry that one cannot build a base station with many antennas without it also being bulky and heavy (i.e., the first definition).

Massive MIMO products are not heavy anymore

Ericsson and Huawei have recently proved that this perception is wrong. The Ericsson AIR 6419 that was announced in February (to be released later this year) contains 64 antenna-integrated radios in a box that is roughly 1 × 0.5 m, with a weight of only 20 kg. This can be compared with Ericsson’s first Massive MIMO product from 2018, which weighed 60 kg. The new product is designed for the 3.5 GHz band, supports 200 MHz of bandwidth, and delivers 320 W of output power. The box contains an application-specific integrated circuit (ASIC) that handles parts of the baseband processing. Huawei introduced a similar product in February that weighs 19 kg and supports 400 MHz of spectrum, but fewer details are available about it.

The new Ericsson AIR 6419 weighs only 20 kg and can therefore be deployed by a single person. (Photo from Ericsson.)

These products seem very much in line with what Massive MIMO researchers like me have been imagining when writing scientific papers. It is impressive to see how quickly this vision has turned into reality, and how 5G has become synonymous with Massive MIMO deployments in sub-6 GHz bands, despite all the fuss about small cells with mmWave spectrum. While both technologies can be used to support higher traffic loads, it is clear that spatial multiplexing has now become the primary solution adopted by network operators in the 5G era.

Open RAN enabled Massive MIMO

While the new Ericsson and Huawei products demonstrate how a tight integration of antennas, radios, and baseband processing enables compact, low-weight Massive MIMO implementations, there is also an opposite trend. Mavenir and Xilinx have teamed up to develop a Massive MIMO solution based on the Open RAN principle of decoupling hardware and software (so that an operator can buy them from different vendors). They claim that their first 64-antenna product, which combines Xilinx’s radio hardware with Mavenir’s cloud-computing platform, will be available by the end of this year. The drawback of the hardware-software decoupling is a higher energy consumption, caused by increased fronthaul signaling (when all processing is done “in the cloud”) and by the use of field-programmable gate arrays (FPGAs) instead of ASICs (since a higher level of flexibility is needed in the processing units when they are not co-designed with the radios).
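To see why the fronthaul signaling becomes a concern when all the processing is moved to the cloud, here is a rough sketch of the raw I/Q fronthaul load for a 64-antenna array. The sample rate and bit resolution are assumed values for illustration, not specifications of the Mavenir/Xilinx product.

```python
# If raw I/Q samples from every antenna are streamed to a cloud platform,
# the fronthaul rate scales as: antennas x sample rate x 2 (I and Q) x bits.
# The numbers below are illustrative assumptions.

antennas = 64
sample_rate_sps = 245.76e6   # assumed sample rate for ~200 MHz of bandwidth
bits_per_sample = 16         # assumed quantization per I or Q component

fronthaul_gbps = antennas * sample_rate_sps * 2 * bits_per_sample / 1e9
print(f"Raw I/Q fronthaul load: {fronthaul_gbps:.0f} Gbit/s")  # ~503 Gbit/s
```

Functional splits that keep parts of the physical-layer processing close to the radio are therefore commonly used to bring the fronthaul rate down to manageable levels.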

Since the 5G technology is still in its infancy, it will be exciting to see how it evolves over the coming years. I believe we will see even larger antenna numbers in the 3.5 GHz band, new array form factors, products that integrate many frequency bands in the same box, digital beamforming in mmWave bands, and new types of distributed antenna deployments. The impact of Massive MIMO will be massive, even if the weight isn’t massive.

Episode 15: Wireless for Machine Learning (with Carlo Fischione)

We have now released the 15th episode of the podcast Wireless Future, with the following abstract:

Machine learning builds on the collection and processing of data. Since the data are often collected by mobile phones or internet-of-things devices, they must be transferred wirelessly to enable machine learning. In this episode, Emil Björnson and Erik G. Larsson are visited by Carlo Fischione, a Professor at the KTH Royal Institute of Technology. The conversation revolves around distributed machine learning and how wireless technology can evolve to support learning applications via network slicing, information-aware communication, and over-the-air computation. To learn more, they recommend the article “Wireless for Machine Learning”. Please visit Carlo’s website and the Machine Learning for Communications ETI website.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 14: Q/A on MIMO, NOMA, and THz Communications

We have now released the 14th episode of the podcast Wireless Future, with the following abstract:

In this episode, Emil Björnson and Erik G. Larsson answer questions from the listeners on the topics of distributed MIMO, THz communications, and non-orthogonal multiple access (NOMA). Some examples are: Is cell-free massive MIMO really a game-changer? What would be its first use case? Can visible light communications be used to reach 1 terabit/s? Will Massive MIMO have a role to play in THz communications? What kind of synchronization and power constraints appear in NOMA systems? Please continue asking questions and we might answer them in later episodes!

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places: