Category Archives: Commentary

Episode 26: Network Slicing

We have now released the 26th episode of the podcast Wireless Future. It has the following abstract:

In the near future, we will be able to deploy new wireless networks without installing new physical infrastructure. The networks will instead be virtualized on shared hardware using the new concept of network slicing. This will enable tailored wireless services for businesses, entertainment, and devices with special demands. In this episode, Erik G. Larsson and Emil Björnson discuss why we need multiple virtual networks, what the practical services might be, who will pay for it, and whether the concept might break net neutrality. The episode starts with a continued discussion on the usefulness of models, based on feedback from listeners regarding Episode 25. The network slicing topic starts after 10 minutes. 

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 25: What Models are Useful?

We have now released the 25th episode of the podcast Wireless Future. It has the following abstract:

The statistician George Box famously said that “All models are wrong, but some are useful”. In this episode, Emil Björnson and Erik G. Larsson discuss what models are useful in the context of wireless communications, and for what purposes. The conversation covers modeling of wireless propagation, noise, hardware, and wireless traffic. A key message is that the modeling requirements are different for algorithmic development and for performance evaluation.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 24: Q&A With 5G and 6G Predictions

We have now released the 24th episode of the podcast Wireless Future, which is a New Year’s special! It has the following abstract:

In this episode, Emil Björnson and Erik G. Larsson answer ten questions from the listeners. The common theme is predictions of how 5G will evolve and which technologies will be important in 6G. The specific questions: Will Moore’s law or Edholm’s law break down first? How important will integrated communication and sensing become? When will private 5G networks start to appear? Will reconfigurable intelligent surfaces be a key enabler of 6G? How can we manage the computational complexity in large-aperture Massive MIMO? Will machine learning be the game-changer in 6G? What is 5G Dynamic Spectrum Sharing? What does the convergence of the Shannon and Maxwell theories imply? What happened to device-to-device communications, is it an upcoming 5G feature? Will full-duplex radios be adopted in the future? If you have a question or idea for a future topic, please share it as a comment to the YouTube version of this episode.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Do We Need More Bandwidth?

The bit rate (bit/s) is so tightly connected with the bandwidth (Hz) that the computer science community uses these words interchangeably. This makes good sense when considering fixed communication channels (e.g., cables), for which the two quantities are proportional, with the spectral efficiency (bit/s/Hz) as the proportionality constant. However, when dealing with time-varying wireless channels, the spectral efficiency can vary by orders of magnitude depending on the propagation conditions (e.g., between the cell center and the cell edge), which weakens the connection between rate and bandwidth.
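
To make this concrete, here is a minimal numerical sketch in Python; the spectral-efficiency values are illustrative assumptions, not measurements:

    # Bit rate R [bit/s] = bandwidth B [Hz] x spectral efficiency SE [bit/s/Hz]
    bandwidth_hz = 100e6   # assumed 100 MHz carrier

    se_cell_center = 6.0   # bit/s/Hz under good propagation conditions (assumption)
    se_cell_edge = 0.5     # bit/s/Hz under poor propagation conditions (assumption)

    rate_center = bandwidth_hz * se_cell_center   # 600 Mbit/s
    rate_edge = bandwidth_hz * se_cell_edge       # 50 Mbit/s
    print(f"Cell center: {rate_center/1e6:.0f} Mbit/s, cell edge: {rate_edge/1e6:.0f} Mbit/s")

With the same bandwidth, the delivered rate differs by more than an order of magnitude, which is exactly why rate and bandwidth cannot be used interchangeably for wireless channels.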

The peak rate of a 4G device can reach 1 Gbit/s, and 5G devices are expected to reach 20 Gbit/s. These numbers greatly surpass the needs of both the most demanding contemporary use cases, such as the 25 Mbit/s required by 4k video streaming, and envisioned virtual reality applications that might require a few hundred Mbit/s. One can certainly imagine other futuristic applications that are more demanding, but since there is a limit to how much information the human perception system can process in real time, these are typically “data shower” situations where a huge dataset must be momentarily transferred to/from a device for later utilization or processing. I think it is fair to say that future networks cannot be built primarily for such niche applications. That is why I made the following one-minute video claiming that wireless doesn’t need more bandwidth but higher efficiency, so that we can deliver bit rates close to the current peak rates most of the time instead of only under ideal circumstances.

Why are people talking about THz communications?

“The spectral bandwidth has increased with every wireless generation, so naturally the same thing will happen in 6G.” This is the kind of argument that you might hear from proponents of (sub-)THz communications, which is synonymous with operating at carrier frequencies beyond 100 GHz, where huge bandwidths are available for utilization. The main weakness of this argument is that increasing the bandwidth has never been the main goal of wireless development, but only a convenient way to increase the data rate.

As the wireless data traffic continues to increase, the main contributing factor will not be that our devices require much higher instantaneous rates when they are active, but that more devices are active more often. Hence, I believe the most important performance metric is the maximum traffic capacity measured in bit/s/km2, which describes the accumulated traffic that the active devices can generate in a given area.

The traffic capacity is determined by three main factors (a numerical sketch follows the list):

  1. The number of spatially multiplexed devices;
  2. The bandwidth efficiency per device; and
  3. The bandwidth.
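
Here is a rough numerical sketch of how these factors combine into an area traffic capacity; all values are illustrative assumptions:

    # Traffic capacity [bit/s/km2] ~ site density x multiplexed devices per site
    #                               x spectral efficiency per device x bandwidth
    sites_per_km2 = 10       # assumed base station density
    devices_per_site = 8     # assumed number of spatially multiplexed devices
    se_per_device = 2.0      # assumed average bandwidth efficiency [bit/s/Hz]
    bandwidth_hz = 100e6     # 100 MHz

    capacity = sites_per_km2 * devices_per_site * se_per_device * bandwidth_hz
    print(f"Traffic capacity: {capacity/1e9:.0f} Gbit/s per km2")   # 16 Gbit/s per km2

Doubling any one of the factors doubles the capacity, which is why more bandwidth is only one of several options.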

We can certainly improve this metric by using more bandwidth, but it is not the only way and it mainly helps users that have good channel conditions. The question that researchers need to ask is: What is the preferred way to increase the traffic capacity from a technical, economical, and practical perspective?

I don’t think we have a conclusive answer to this yet, but it is important to remember that even if the laws of nature stay constant, the preferred solution can change with time. A concrete example is the development of processors, for which the main computing performance metric is the number of floating-point operations per second (FLOPS). Improving this metric used to be synonymous with increasing the clock speed, but that trend has now been replaced with increasing the number of cores and using parallel threads, because this approach gives more power- and heat-efficient solutions than pushing the clock speed beyond its current range.

The corresponding development in wireless communications would be to stop increasing the bandwidth (which determines the sampling rate of the signals and the clock speed needed for processing) and instead focus on multiplexing many data streams, which take the role of the threads in this analogy, and balancing the bandwidth efficiency between the streams. The following video describes my thoughts on how to develop wireless technology in that direction:

As a final note, as the traffic capacity in wireless networks increases, there will be some point-to-point links that require huge capacity. This is particularly the case between an access point and the core network. These links will eventually require cables or wireless technologies that can handle many Tbit/s, and the wireless option will then call for THz communications. The points that I make above apply to the wireless links at the edge, between devices and access points, not to the backhaul infrastructure.

Is 5G a Failed Technology?

Four years ago, I reviewed the book “The 5G Myth” by William Webb. The author described how the telecom industry was developing a 5G technology that addresses the wrong issues: for example, higher peak rates and other things that are barely needed and seldom reached in practice. Instead, he argued that more consistent connectivity quality should be the goal for the future. I generally agree with his criticism of the 5G visions that one heard at conferences at the time, even if I noted in my review that the argumentation in the book was sometimes questionable. In particular, the book propagated several myths about MIMO technology.

Webb wrote a blog post earlier this year where he continues to criticize 5G, this time in the form of analyzing whether the 5G visions have been achieved. His main conclusion is that “5G is a long way from delivering on the original promises”, thereby implying that 5G is a failed technology. While the facts that Webb refers to in his blog post are indisputable, the main issue with his argumentation is that what he calls the “original promises” refers to the long-term visions that were presented by a few companies around 2013, not the actual 5G requirements set by the ITU. Moreover, it is way too early to tell whether 5G will reach its goals or not.

Increasing data volumes

Let us start by discussing the mobile data volumes. Webb says that 5G promised to increase them by 1000 times. According to the Ericsson Mobility Report, the mobile data traffic in North America has grown from 1 to 4 EB/month between 2015 and 2020, which corresponds to an annual increase of 32%. This growth is created by a gradual increase in the demand for wireless data, which has been enabled by a gradual deployment of new sites and upgrades of existing sites. Looking ahead, the Mobility Report predicts another 3.5 times growth over the next 5 years, corresponding to an annual increase of 28%. These predictions have been fairly stable over the last few years. The point I want to make is that one cannot expect 5G to drastically change the data volumes from one day to the next; rather, the goal of the technological evolution is to support and sustain the long-term growth in data traffic, which will likely stay at around 30% per year for the foreseeable future. Whether 5G will enable this or not is too early to tell, because we have only had the opportunity to observe one year of 5G utilization, in a few markets.
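
For reference, these annual growth figures follow from the standard compound-growth formula; here is a minimal sketch in Python that reproduces them:

    # Compound annual growth rate: (end/start)**(1/years) - 1
    growth_2015_2020 = (4 / 1) ** (1 / 5) - 1   # 1 -> 4 EB/month over 5 years
    growth_next_5y = 3.5 ** (1 / 5) - 1         # predicted 3.5x growth over 5 years

    print(f"2015-2020: {growth_2015_2020:.0%} per year")    # ~32%
    print(f"Next 5 years: {growth_next_5y:.0%} per year")   # ~28%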

The mobile data traffic in North America according to the Ericsson Mobility Report.

Importantly, the 5G requirements defined by the ITU don’t contain any relative targets for how much the data volumes should increase over 4G. The 1000 times increase that Webb refers to originates from a 2012 white paper by Qualcomm. This paper discusses (in general terms) how to overcome the “1000x mobile data challenge” without saying that 5G alone should achieve it or what the exact time frame would be. Nokia presented a similar vision in 2013. I have used the 1000x number in several talks, including a popular YouTube video. However, the goal of that discussion has only been to explain how we can build networks that support 1000 times higher data volumes, not to claim that the demand will grow by such an immense factor any time soon. Even if the traffic suddenly started to double every year, it would take 10 years to reach 1000 times higher traffic than today.
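
To see how far away a 1000x increase actually is, here is a small sketch of how many years it would take to multiply the traffic by 1000 at different annual growth rates (the 30% rate is the historical figure discussed above; the other rates are hypothetical):

    import math

    def years_to_reach(factor, annual_growth):
        """Years needed to multiply the traffic by 'factor' at a given annual growth rate."""
        return math.log(factor) / math.log(1 + annual_growth)

    for growth in (1.0, 0.5, 0.3):   # 100%, 50%, and 30% per year
        print(f"{growth:.0%} per year: {years_to_reach(1000, growth):.0f} years to 1000x")
    # Doubling every year (100%) -> ~10 years; 50% -> ~17 years; 30% -> ~26 years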

The current state of 5G

The 5G deployments have so far been utilizing Massive MIMO technology in the 3 GHz band. This is a technology for managing higher data volumes by spatial multiplexing of many users; thus, it is only when the traffic increases that we can actually measure how well the technology performs. Wireless data isn’t a fixed resource that we can allocate as we like between the users; instead, the data volume depends on the number of multiplexed users and their respective propagation conditions. However, field trials have shown that the technology delivers on the promise of achieving much higher spectral efficiencies.

When it comes to higher peak data rates, there are indeed ITU targets that one can compare between network generations. The 4G target was 1 Gbps, while it is 20 Gbps in 5G. The latter number is supposed to be achieved using 1 GHz of spectrum in the mmWave bands. The high-band 5G deployments are still in their infancy, but Verizon has at least reached 5 Gbps in their network.
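
Taking these numbers at face value, the implied peak spectral efficiency is straightforward to compute; a back-of-the-envelope sketch:

    peak_rate_5g = 20e9   # 20 Gbps ITU peak-rate target for 5G
    bandwidth = 1e9       # 1 GHz of mmWave spectrum
    implied_se = peak_rate_5g / bandwidth
    print(f"Implied peak spectral efficiency: {implied_se:.0f} bit/s/Hz")   # 20 bit/s/Hz

Such a spectral efficiency can only be approached by combining high-order modulation with several spatial layers, which is one reason why the peak rate is reached only under ideal conditions.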

To be fair, Webb provides a disclaimer in his blog post, saying that his analysis is based on the current state of 5G, where mmWave is barely used. My point is that it is too early to conclude whether 5G will reach any of its targets, since we are just in the first phase of 5G deployments. Most of the new features, including lower latency, higher peak rates, and massive IoT connectivity, aren’t supposed to be supported until much later. Moreover, the consistent connectivity paradigm that Webb pushed for in his book is what 5G might deliver using cell-free implementations, for example, based on the Ericsson concept of radio stripes.

Webb draws one more conclusion in his blog post: “4G was actually more revolutionary than 5G.” This might be true in the sense that 4G was the first mobile broadband generation to be utilized in large parts of the world. While the data volumes have “only” increased by 30% per year in North America over the last decade, the growth rate has been truly revolutionary in developing parts of the world (e.g., +120% per year in India). The hope is that 5G will eventually be the platform that enables the digitalization of society and new industries, including autonomous cars and factories. The future will tell whether those visions will materialize or not, and whether it will be a revolution or an evolution.

Is 5G a failed technology?

It is too early to tell, since the 5G visions have focused on the long-term perspective, but so far I think the development is progressing as planned. When discussing these matters, it is important to evaluate 5G against the ITU requirements and not against the (potentially) over-optimistic visions from individual researchers or companies. As Bill Gates once put it:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

It is All About Multiplexing

Every few months, there is a new press release about how a mobile network operator has collaborated with a network vendor to set a new 5G data speed record. There is no doubt that carrier aggregation between the mid-band and mmWave band can deliver more than 5 Gbps. However, it is less clear what we would actually need such high speeds for. The majority of the data traffic in current networks is consumed by video streaming. Even if you stream a 4k resolution video, the codec doesn’t need more than 25 Mbps! Hence, 5G allows you to download an entire motion picture in a matter of seconds, but that goes against the main principle of video streaming, namely that the video is downloaded at the same pace as it is watched to alleviate the need for intermediate storage (apart from buffering). So what is the point of these high speeds? That is what I will explain in this blog post.

The mobile data traffic is growing by 25-50% per year, but the reason is not that we require higher data rates when using our devices. Instead, the main reason is that we are using our devices more frequently; thus, the cellular networks must evolve to manage the increasing accumulated data rate demand of the active devices.

In other words, our networks must be capable of multiplexing all the devices that want to be active simultaneously during peak hours. As the traffic grows, more devices can be multiplexed per km2 by deploying more base stations that can each serve a certain number of devices, by using more spectrum that can be divided between the devices, or by using Massive MIMO technology for spatial multiplexing through beamforming.

The preferred multiplexing solution depends on the deployment cost and various local practicalities (e.g., the shape of the propagation environment and the user distribution). For example, the main purpose of the new mmWave spectrum is not to continuously deliver 5 Gbps to a single user, but to share that traffic capacity between the many users in hotspots. If each user requires 25 Mbps, then 200 users can share a 5 Gbps capacity. So far, there are few deployments of that kind, since the first 5G networks have instead relied on Massive MIMO in the 3.5 GHz band to deliver multi-gigabit accumulated data rates.
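
As a quick sanity check of the hotspot example above (assuming every user streams at 25 Mbps):

    hotspot_capacity = 5e9   # 5 Gbps shared mmWave capacity
    per_user_rate = 25e6     # 25 Mbps, enough for 4k video streaming

    print(f"Users served simultaneously: {hotspot_capacity / per_user_rate:.0f}")   # 200
    # If twice as many users show up (hypothetical), each still gets a workable share:
    print(f"Per-user rate with 400 users: {hotspot_capacity / 400 / 1e6:.1f} Mbps")  # 12.5 Mbps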

I believe that spatial multiplexing will continue to be the preferred solution in future network generations, while mmWave spectrum will mainly be utilized as a WiFi replacement in hotspots with many users and high service requirements. I am skeptical of the claims that future networks must operate at higher carrier frequencies (e.g., THz bands); we don’t need more spectrum, we need better multiplexing capabilities, and those can be achieved in other ways than taking a wide bandwidth and sharing it between the users. In the following video, I elaborate more on these things:

IEEE Globecom Workshop on Wireless Communications for Distributed Intelligence

The 2021 IEEE GLOBECOM workshop on “Wireless Communications for Distributed Intelligence” will be held in Madrid, Spain, in December 2021. The workshop aims to investigate and redefine the roles of wireless communications in decentralized Artificial Intelligence (AI) systems, including distributed sensing, information processing, automatic control, learning, and inference.

We invite submissions of original works on the related topics, which include but are not limited to the following:

  • Network architecture and protocol design for AI-enabled 6G
  • Federated learning (FL) in wireless networks
  • Multi-agent reinforcement learning in wireless networks
  • Communication efficiency in distributed machine learning (ML)
  • Energy efficiency in distributed ML
  • Cross-layer (PHY, MAC, network layer) design for distributed ML
  • Wireless resource allocation for distributed ML
  • Signal processing for distributed ML
  • Over-the-air (OTA) computation for FL
  • Emerging PHY technologies for OTA FL
  • Privacy and security issues of distributed ML
  • Adversary-resilient distributed sensing, learning, and inference
  • Fault tolerance in distributed stochastic gradient descent (DSGD) systems
  • Fault tolerance in multi-agent systems
  • Fundamental limits of distributed ML with imperfect communication