
Is 5G a Failed Technology?

Four years ago, I reviewed the book “The 5G Myth” by William Webb. The author described how the telecom industry was developing a 5G technology that addressed the wrong issues; for example, higher peak rates and other things that are barely needed and seldom reached in practice. Instead, he argued that a more consistent connectivity quality should be the goal for the future. I generally agree with his criticism of the 5G visions that one heard at conferences at the time, even if I noted in my review that the argumentation in the book was sometimes questionable. In particular, the book propagated several myths about MIMO technology.

Webb wrote a blog post earlier this year where he continues to criticize 5G, this time by analyzing whether the 5G visions have been achieved. His main conclusion is that “5G is a long way from delivering on the original promises”, thereby implying that 5G is a failed technology. While the facts that Webb refers to in his blog post are indisputable, the main issue with his argumentation is that what he calls the “original promises” refers to the long-term visions presented by a few companies around 2013, not to the actual 5G requirements set by the ITU. Moreover, it is far too early to tell whether 5G will reach its goals or not.

Increasing data volumes

Let us start by discussing the mobile data volumes. Webb says that 5G promised to increase them by 1000 times. According to the Ericsson Mobility Report, the traffic in North America has grown from 1 to 4 EB/month between 2015 and 2020. This corresponds to an annual increase of 32%. This growth is created by a gradual increase in demand for wireless data, which has been enabled by a gradual deployment of new sites and an upgrade of existing sites. Looking ahead, the Mobility Report predicts another 3.5 times growth over the next 5 years, corresponding to an annual increase of 28%. These predictions have been fairly stable over the last few years. The point I want to make is that one cannot expect 5G to drastically change the data volumes from one day to the next; the goal of the technological evolution is to support and sustain the long-term growth in data traffic, which will likely remain at around 30% per year for the foreseeable future. Whether 5G will enable this or not is too early to tell, because we have only had the opportunity to observe one year of 5G utilization, in a few markets.
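For readers who want to check the arithmetic, here is a minimal Python sketch (using only the numbers quoted above from the Mobility Report) that converts the total growth factors into annual rates:

```python
# Convert total growth factors into annual growth rates
# (input values are the figures quoted in this post, nothing more).

def annual_growth(total_factor, years):
    """Annual growth rate implied by a total growth factor over a period."""
    return total_factor ** (1 / years) - 1

# 1 -> 4 EB/month between 2015 and 2020 (5 years)
print(f"2015-2020: {annual_growth(4 / 1, 5):.0%} per year")   # ~32%

# Predicted 3.5x growth over the next 5 years
print(f"Next 5 years: {annual_growth(3.5, 5):.0%} per year")  # ~28%
```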

The mobile data traffic in North America according to the Ericsson Mobility Report.

Importantly, the 5G requirements defined by the ITU don’t contain any relative targets for how much the data volumes should increase over 4G. The 1000 times increase that Webb refers to originates from a 2012 white paper by Qualcomm. This paper discusses (in general terms) how to overcome the “1000x mobile data challenge” without saying that 5G alone should achieve it or what the exact time frame would be. Nokia presented a similar vision in 2013. I have used the 1000x number in several talks, including a popular YouTube video. However, the goal of the discussion has only been to explain how we can build networks that support 1000 times higher data volumes, not to claim that the demand will grow by such an immense factor any time soon. Even if the traffic suddenly started to double every year, it would take 10 years to reach 1000 times higher traffic than today.
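To put the 1000x figure in perspective, here is a small sketch (my own illustration) of how many years of sustained growth it would take to reach it, both at the doubling rate mentioned above and at the roughly 30% per year observed in practice:

```python
import math

# Years of sustained growth needed to multiply the traffic by a given factor.
def years_to_reach(target_factor, annual_growth):
    return math.log(target_factor) / math.log(1 + annual_growth)

print(f"{years_to_reach(1000, 1.00):.0f} years at 100% growth per year")  # ~10 years
print(f"{years_to_reach(1000, 0.30):.0f} years at 30% growth per year")   # ~26 years
```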

The current state of 5G

The 5G deployments have so far been utilizing Massive MIMO technology in the 3 GHz band. This is a technology for managing higher data volumes by spatial multiplexing of many users; thus, it is only when the traffic increases that we can actually measure how well the technology performs. Wireless data isn’t a fixed resource that we can allocate as we like among the users; the achievable data volume depends on the number of multiplexed users and their respective propagation conditions. However, field trials have shown that the technology delivers on the promise of achieving much higher spectral efficiencies.

When it comes to higher peak data rates, there are indeed ITU targets that one can compare between network generations. The 4G target was 1 Gbps, while it is 20 Gbps in 5G. The latter number is supposed to be achieved using 1 GHz of spectrum in the mmWave bands. The high-band 5G deployments are still in their infancy, but Verizon has at least reached 5 Gbps in their network.
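As a back-of-the-envelope illustration (my own, not part of the ITU evaluation methodology), the 20 Gbps target over 1 GHz of spectrum implies the following peak spectral efficiency:

```python
# What the 20 Gbps peak-rate target implies over 1 GHz of mmWave spectrum.
peak_rate_bps = 20e9   # ITU peak data rate target for 5G
bandwidth_hz = 1e9     # assumed mmWave bandwidth

print(f"Required peak spectral efficiency: {peak_rate_bps / bandwidth_hz:.0f} bit/s/Hz")
# ~20 bit/s/Hz, which requires several spatial layers with high-order modulation
```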

To be fair, Webb provides a disclaimer in his blog post saying that his analysis is based on the current state of 5G, where mmWave is barely used. My point is that it is too early to conclude whether 5G will reach any of its targets since we are just in the first phase of 5G deployments. Most of the new features, including lower latency, higher peak rates, and massive IoT connectivity, aren’t supposed to be supported until much later. Moreover, the consistent connectivity paradigm that Webb pushed for in his book is what 5G might deliver using cell-free implementations, for example, using the Ericsson concept of radio stripes.

Webb draws one more conclusion in his blog post: “4G was actually more revolutionary than 5G.” This might be true in the sense that 4G was the first mobile broadband generation to be utilized in large parts of the world. While the data volumes have “only” increased by 30% per year in North America in the last decade, the growth rate has been truly revolutionary in developing parts of the world (e.g., +120% per year in India). There is a hope that 5G will eventually be the platform that enables the digitalization of society and new industries, including autonomous cars and factories. The future will tell whether those visions will materialize or not, and whether it will be a revolution or an evolution.

Is 5G a failed technology?

It is too early to tell, since the 5G visions focus on the long-term perspective, but so far, I think 5G is progressing as planned. When discussing these matters, it is important to evaluate 5G against the ITU requirements and not against the (potentially) over-optimistic visions from individual researchers or companies. As Bill Gates once put it:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

It is All About Multiplexing

Every few months, there is a new press release about how a mobile network operator has collaborated with a network vendor to set a new 5G data speed record. There is no doubt that carrier aggregation between the mid-band and mmWave band can deliver more than 5 Gbps. However, it is less clear what we would actually need such high speeds for. The majority of the data traffic in current networks is consumed by video streaming. Even if you stream a 4K-resolution video, the codec doesn’t need more than 25 Mbps! Hence, 5G allows you to download an entire motion picture in a matter of seconds, but that goes against the main principle of video streaming, namely that the video is downloaded at the same pace as it is watched to alleviate the need for intermediate storage (apart from buffering). So what is the point of these high speeds? That is what I will explain in this blog post.
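To make the mismatch concrete, here is a quick calculation (illustrative numbers of my own, apart from the 25 Mbps streaming rate quoted above):

```python
# Size of a two-hour movie at 4K streaming quality, and how long it takes
# to download it at a 5 Gbps peak rate.
stream_rate_bps = 25e6     # 4K video codec rate
movie_length_s = 2 * 3600  # a two-hour movie
link_rate_bps = 5e9        # peak rate from a 5G speed record

movie_size_bits = stream_rate_bps * movie_length_s
print(f"Movie size: {movie_size_bits / 8e9:.1f} GB")                        # ~22.5 GB
print(f"Download time at 5 Gbps: {movie_size_bits / link_rate_bps:.0f} s")  # ~36 s
```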

The mobile data traffic is growing by 25-50% per year, but the reason is not that we require higher data rates when using our devices. Instead, the main reason is that we are using our devices more frequently; thus, the cellular networks must evolve to manage the increasing accumulated data rate demand of the active devices.

In other words, our networks must be capable of multiplexing all the devices that want to be active simultaneously in peak hours. As the traffic grows, more devices can be multiplexed per km² by either deploying more base stations that each can serve a certain number of devices, using more spectrum that can be divided between the devices, or using Massive MIMO technology for spatial multiplexing by beamforming.

The preferred multiplexing solution depends on the deployment cost and various local practicalities (e.g., the shape of the propagation environment and the user distribution). For example, the main purpose of the new mmWave spectrum is not to continuously deliver 5 Gbps to a single user, but to share that traffic capacity between the many users in hotspots. If each user requires 25 Mbps, then 200 users can share a 5 Gbps capacity. So far, there are few deployments of that kind, since it is Massive MIMO in the 3.5 GHz band that has been deployed in the first 5G networks to deliver multi-gigabit accumulated data rates.
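To illustrate how the three multiplexing options scale, here is a minimal sketch; all input values are hypothetical examples chosen for illustration, except the 25 Mbps per-user rate mentioned above:

```python
# Number of 25 Mbps users that can be served per km^2, as a function of
# site density, bandwidth, spectral efficiency per layer, and spatial layers.
def users_per_km2(sites_per_km2, bandwidth_hz, se_bps_per_hz,
                  spatial_layers, rate_per_user_bps=25e6):
    area_capacity = sites_per_km2 * bandwidth_hz * se_bps_per_hz * spatial_layers
    return area_capacity / rate_per_user_bps

print(users_per_km2(3, 100e6, 2, 1))  # baseline: 24 users
print(users_per_km2(6, 100e6, 2, 1))  # densify the network: 48 users
print(users_per_km2(3, 400e6, 2, 1))  # add more spectrum: 96 users
print(users_per_km2(3, 100e6, 2, 8))  # Massive MIMO with 8 layers: 192 users
```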

I believe that spatial multiplexing will continue to be the preferred solution in future network generations, while mmWave spectrum will mainly be utilized as a WiFi replacement in hotspots with many users and high service requirements. I am skeptical towards the claims that future networks must operate at higher carrier frequencies (e.g., THz bands); we don’t need more spectrum, we need better multiplexing capabilities, and those can be achieved in other ways than taking a wide bandwidth and sharing it between the users. In the following video, I elaborate more on these things:

IEEE Globecom Workshop on Wireless Communications for Distributed Intelligence

The 2021 IEEE GLOBECOM workshop on “Wireless Communications for Distributed Intelligence” will be held in Madrid, Spain, in December 2021. This workshop aims at investigating and re-defining the roles of wireless communications for decentralized Artificial Intelligence (AI) systems, including distributed sensing, information processing, automatic control, learning and inference.

We invite submissions of original works on the related topics, which include but are not limited to the following:

  • Network architecture and protocol design for AI-enabled 6G
  • Federated learning (FL) in wireless networks
  • Multi-agent reinforcement learning in wireless networks
  • Communication efficiency in distributed machine learning (ML)
  • Energy efficiency in distributed ML
  • Cross-layer (PHY, MAC, network layer) design for distributed ML
  • Wireless resource allocation for distributed ML
  • Signal processing for distributed ML
  • Over-the-air (OTA) computation for FL
  • Emerging PHY technologies for OTA FL
  • Privacy and security issues of distributed ML
  • Adversary-resilient distributed sensing, learning, and inference
  • Fault tolerance in distributed stochastic gradient descent (DSGD) systems
  • Fault tolerance in multi-agent systems
  • Fundamental limits of distributed ML with imperfect communication

Massive MIMO Becomes Less Massive and More Open

The name “Massive MIMO” has been debated since its inception. Tom Marzetta introduced it ten years ago as one of several potential names for his envisioned MIMO technology with a very large number of antennas. Different researchers used different terminologies in their papers during the first years of research on the topic, but the community eventually converged to calling it Massive MIMO.

The apparent issue with that terminology is that the adjective “massive” can have different meanings. The first definition in the Merriam-Webster dictionary is “consisting of a large mass”, in the sense of being “bulky” and “heavy”. The second definition is “large in scope or degree”, in the sense of being “large in comparison to what is typical”.

It is probably the second definition that Marzetta had in mind when introducing the name “Massive MIMO”; that is, a MIMO technology with a number of antennas that is large in comparison to what was typically considered in the 4G era. Yet, there has been a perception in the industry that one cannot build a base station with many antennas without it also being bulky and heavy (i.e., the first definition).

Massive MIMO products are not heavy anymore

Ericsson and Huawei have recently proved that this perception is wrong. The Ericsson AIR 6419, which was announced in February (to be released later this year), contains 64 antenna-integrated radios in a box that is roughly 1 x 0.5 m, with a weight of only 20 kg. This can be compared with Ericsson’s first Massive MIMO product from 2018, which weighed 60 kg. The product is designed for the 3.5 GHz band, supports 200 MHz of bandwidth, and provides 320 W of output power. The box contains an application-specific integrated circuit (ASIC) that handles parts of the baseband processing. Huawei introduced a similar product in February that weighs 19 kg and supports 400 MHz of spectrum, but fewer details are available regarding it.

The new Ericsson AIR 6419 only weighs 20 kg, thus it can be deployed by a single person. (Photo from Ericsson.)

These products seem very much in line with what Massive MIMO researchers like me have been imagining when writing scientific papers. It is impressive to see how quickly this vision has turned into reality, and how 5G has become synonymous with Massive MIMO deployments in sub-6 GHz bands, despite all the fuss about small cells with mmWave spectrum. While both technologies can be used to support higher traffic loads, it is clear that spatial multiplexing has now become the primary solution adopted by network operators in the 5G era.

Open RAN enabled Massive MIMO

While the new Ericsson and Huawei products demonstrate how a tight integration of antennas, radios, and baseband processing enables compact, low-weight Massive MIMO implementation, there is also an opposite trend. Mavenir and Xilinx have teamed up to build a Massive MIMO solution that builds on the Open RAN principle of decoupling hardware and software (so that the operator can buy these from different vendors). They claim that their first 64-antenna product, which combines Xilinx’s radio hardware with Mavenir’s cloud-computing platform, will be available by the end of this year. The drawback with the hardware-software decoupling is the higher energy consumption caused by increased fronthaul signaling (when all processing is done “in the cloud”) and the use of field-programmable gate arrays (FPGAs) instead of ASICs (since a higher level of flexibility is needed in the processing units when these are not co-designed with the radios).

Since the 5G technology is still in its infancy, it will be exciting to see how it evolves over the coming years. I believe we will see even larger antenna numbers in the 3.5 GHz band, new array form factors, products that integrate many frequency bands in the same box, digital beamforming in mmWave bands, and new types of distributed antenna deployments. The impact of Massive MIMO will be massive, even if the weight isn’t massive.

Episode 15: Wireless for Machine Learning (with Carlo Fischione)

We have now released the 15th episode of the podcast Wireless Future, with the following abstract:

Machine learning builds on the collection and processing of data. Since the data often are collected by mobile phones or internet-of-things devices, they must be transferred wirelessly to enable machine learning. In this episode, Emil Björnson and Erik G. Larsson are visited by Carlo Fischione, a Professor at the KTH Royal Institute of Technology. The conversation circles around distributed machine learning and how the wireless technology can evolve to support learning applications via network slicing, information-aware communication, and over-the-air computation. To learn more, they recommend the article “Wireless for Machine Learning”. Please visit Carlo’s website and the Machine Learning for Communications ETI website.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 14: Q/A on MIMO, NOMA, and THz Communications

We have now released the 14th episode of the podcast Wireless Future, with the following abstract:

In this episode, Emil Björnson and Erik G. Larsson answer questions from the listeners on the topics of distributed MIMO, THz communications, and non-orthogonal multiple access (NOMA). Some examples are: Is cell-free massive MIMO really a game-changer? What would be its first use case? Can visible light communications be used to reach 1 terabit/s? Will Massive MIMO have a role to play in THz communications? What kind of synchronization and power constraints appear in NOMA systems? Please continue asking questions and we might answer them in later episodes!

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Is There a Future for NOMA?

One of the physical-layer technologies that have received a lot of attention from the research community in recent years is called Non-Orthogonal Multiple Access (NOMA). For instance, it has been called “A Paradigm Shift for Multiple Access for 5G and Beyond”.

In NOMA, the users are assigned to the same time-frequency resource and instead separated in the power or code domains.

The core idea of NOMA is to assign the same time-frequency resource to multiple users, and instead (partially) separate the users in the power or code domain. This is illustrated in the figure and stands in contrast to the classic approach of assigning orthogonal resources to the users, which was done in 4G using Orthogonal Frequency-Division Multiple Access (OFDMA) and in 3G using orthogonal spreading codes. The benefit of the non-orthogonality is that the sum spectral efficiency (bit/s/Hz/cell) can be increased, if the increased interference can be dealt with using clever signal processing, such as successive interference cancelation. But from a practical standpoint, it matters a lot if the performance gain is 1% or 1000% (~10x). The former is negligible while the latter would constitute a paradigm shift.
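To make the difference tangible, here is a minimal two-user downlink sketch of power-domain NOMA with successive interference cancelation, compared against orthogonal access with equal time sharing; the channel gains and power split are hypothetical values of my own:

```python
import numpy as np

P, N0 = 1.0, 0.01            # total transmit power and noise power
g_near, g_far = 1.0, 0.05    # channel gains of the near and far user
alpha = 0.8                  # fraction of the power given to the far (weak) user

# NOMA: the far user treats the near user's signal as noise, while the near
# user first decodes and cancels the far user's signal (SIC).
R_far_noma = np.log2(1 + alpha * P * g_far / ((1 - alpha) * P * g_far + N0))
R_near_noma = np.log2(1 + (1 - alpha) * P * g_near / N0)

# Orthogonal access: each user gets half of the time with full power.
R_far_oma = 0.5 * np.log2(1 + P * g_far / N0)
R_near_oma = 0.5 * np.log2(1 + P * g_near / N0)

print(f"NOMA sum rate: {R_near_noma + R_far_noma:.2f} bit/s/Hz")  # ~5.98
print(f"OMA sum rate:  {R_near_oma + R_far_oma:.2f} bit/s/Hz")    # ~4.62
```

In this particular setup the non-orthogonal scheme comes out ahead, but the size of the gain depends heavily on the disparity between the channel gains and on the chosen power split.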

Massive MIMO is also based on non-orthogonal access

The use of many antennas has become a natural part of 5G. When having an antenna array, the users can be spatially multiplexed instead of assigned to orthogonal time-frequency resources. This is what we call Massive MIMO, and it is a non-orthogonal multiple access scheme; if you direct a spatial beam towards each user, there will be interference leakage between the beams. MIMO schemes have been around for decades, for example, under the name spatial division multiple access (SDMA). There is both experimental and theoretical evidence that the widespread support for Massive MIMO in 5G is a paradigm shift when it comes to spectral efficiency, but it is nevertheless not what most papers refer to when using the NOMA terminology.

Instead, the NOMA literature focuses on another aspect of the non-orthogonality: joint decoding of the interfering signals. It is known in information theory that weakly interfering signals should be treated as noise, while strongly interfering signals should be decoded jointly with the desired signal (or using successive interference cancelation). Hence, the methods considered in the NOMA literature are mainly effective in systems with strongly interfering signals.

Since Massive MIMO has been part of 5G from the beginning, while NOMA remains to be standardized, a natural question arises:

Do we need other non-orthogonal access schemes than Massive MIMO in 5G?

One of the key motivating factors for Massive MIMO is favorable propagation, which basically means that the base station has sufficiently many antennas to beamform so that the users’ channels become nearly orthogonal. One can think of it as transmitting narrow beams that lead to low interference leakage. Under these conditions, there are no strongly interfering signals, which implies that no additional NOMA features are needed to deal with the interference. We have shown this analytically in two papers: one about power-domain NOMA and a new one about code-domain NOMA.
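As a small illustration of favorable propagation (my own simulation, not taken from the papers above), the following sketch shows how the normalized correlation between two users’ i.i.d. Rayleigh fading channels shrinks as the number of antennas M grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_channel_correlation(M, trials=10_000):
    """Average |h1^H h2| / (||h1|| ||h2||) for i.i.d. Rayleigh fading channels."""
    h1 = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    h2 = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    corr = np.abs(np.sum(np.conj(h1) * h2, axis=1))
    corr /= np.linalg.norm(h1, axis=1) * np.linalg.norm(h2, axis=1)
    return corr.mean()

for M in [4, 16, 64, 256]:
    print(f"M = {M:3d}: average correlation = {avg_channel_correlation(M):.3f}")
# The correlation decays roughly as 1/sqrt(M), so the channels become nearly orthogonal.
```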

Although these papers show that NOMA usually cannot improve the sum spectral efficiency, there are indeed some special cases when it can. In particular, this happens in situations when the number of antennas is insufficient to achieve favorable propagation. This can, for example, happen in line-of-sight scenarios where the users are closely spaced and therefore have very similar channels. However, in my experience, the NOMA gains are marginal also in these cases. When writing the two papers mentioned above, we had to spend much time on parameter tuning to find the cases where NOMA could provide meaningful improvements. With this in mind, it is fully plausible that NOMA will never be used in 5G, at least not for increasing the spectral efficiency (it could be useful for other purposes, such as grant-free access).

What about beyond 5G systems?

When it became clear that NOMA wouldn’t play any big role in 5G, the research focus shifted towards beyond-5G systems. One of the prominent new advances on non-orthogonal access is called rate splitting. The recent paper “Is NOMA Efficient in Multi-Antenna Networks?” provides a pedagogical overview. The paper also makes the case that rate splitting methods combine the best aspects of conventional NOMA and Massive MIMO, in a way that guarantees a higher sum spectral efficiency. While it is true that a well-designed rate splitting system can never be worse than conventional Massive MIMO with linear processing, the key question is: how large performance gains can be achieved?

In the overview paper, the case for rate splitting is based on multiplexing gain analysis. This means that the sum spectral efficiency (bit/s/Hz) is studied when the transmit power P is asymptotically large. Different access schemes will achieve different spectral efficiencies, but they all behave as M log2(P)+C, where the factor M is the multiplexing gain and C is a constant. When P is large, the scheme that achieves the largest multiplexing gain is guaranteed to give the largest spectral efficiency, irrespective of the value of C.
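The following sketch (with made-up values of M and C) shows why the multiplexing gain dominates at high power:

```python
import numpy as np

# Two hypothetical schemes: A has a larger multiplexing gain, B a larger constant.
P_dB = np.arange(0, 61, 20)
P = 10 ** (P_dB / 10)

SE_A = 4 * np.log2(P) + 0    # multiplexing gain M = 4, constant C = 0
SE_B = 2 * np.log2(P) + 20   # multiplexing gain M = 2, constant C = 20

for p_db, a, b in zip(P_dB, SE_A, SE_B):
    print(f"P = {p_db:2d} dB: scheme A = {a:5.1f}, scheme B = {b:5.1f} bit/s/Hz")
# At low P the large constant wins, but at high P the larger multiplexing gain wins.
```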

If the channels are known perfectly, then a single-cell Massive MIMO system achieves the maximum multiplexing gain (it is equal to the minimum of the total number of transmit antennas and the total number of receive antennas). However, if the channels are known imperfectly, then the multiplexing gain is reduced when using linear processing, and the above-mentioned paper shows that the rate splitting approach can be added to achieve a larger multiplexing gain than conventional Massive MIMO. This is mathematically correct, but there is one catch: the power used for channel estimation is assumed to grow more slowly than the power P used for data transmission. However, in practice, we could use the same power for both estimation and data transmission; hence, in the large-P regime considered in the multiplexing gain analysis, we will have perfect channel knowledge. Rate splitting cannot increase the multiplexing gain in that case.
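To spell out that last step, here is a standard MMSE channel estimation identity (written for a single-antenna Rayleigh fading channel with variance β, pilot power p, and noise variance σ²; my own illustration of the argument, not an excerpt from the cited paper). The estimation error variance vanishes when the pilot power grows with P:

```latex
% Pilot observation: y = sqrt(p) h + n, with h ~ CN(0, beta) and n ~ CN(0, sigma^2).
\[
  \hat{h} = \frac{\sqrt{p}\,\beta}{p\beta + \sigma^2}\, y,
  \qquad
  \mathbb{E}\!\left[|h - \hat{h}|^2\right] = \frac{\beta\sigma^2}{p\beta + \sigma^2}
  \;\longrightarrow\; 0
  \quad \text{as } p = P \to \infty.
\]
```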

That said, rate splitting can still improve the sum spectral efficiency compared to Massive MIMO in practical setups (at least it cannot be worse), but we should not expect any paradigm shift. Massive MIMO is already utilizing the multiplexing gain to push the spectral efficiency to new heights in 5G. Further improvements are possible by increasing the number of antennas, but not by refining the access scheme; that could only increase the parameter C, not M.

If you want to learn more about NOMA and rate splitting, I recommend the following episode of our podcast: