Category Archives: Beyond 5G

Episode 28: Ultra-Reliable Low-Latency Communication (With Petar Popovski)

We have now released the 28th episode of the podcast Wireless Future. It has the following abstract:

The reliability of an application is determined by its weakest link, which is often the wireless link. Channel coding and retransmissions are traditionally used to enhance reliability, but at the cost of extra latency. 5G promises to enhance both reliability and latency in a new operational mode called ultra-reliable low-latency communication (URLLC). In this episode, Erik G. Larsson and Emil Björnson discuss URLLC with Petar Popovski, Professor at Aalborg University, Denmark. The conversation pinpoints the physical reasons for latency and unreliability, as well as viable solutions related to network deployment, diversity, digital vs. analog communications, non-orthogonal network slicing, and machine learning. Further details can be found in the article “Wireless Access in Ultra-Reliable Low-Latency Communication (URLLC)” and its companion video.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 24: Q&A With 5G and 6G Predictions

We have now released the 24th episode of the podcast Wireless Future, which is a New Year’s special! It has the following abstract:

In this episode, Emil Björnson and Erik G. Larsson answer ten questions from the listeners. The common theme is predictions of how 5G will evolve and which technologies will be important in 6G. The specific questions are: Will Moore’s law or Edholm’s law break down first? How important will integrated communication and sensing become? When will private 5G networks start to appear? Will reconfigurable intelligent surfaces be a key enabler of 6G? How can we manage the computational complexity in large-aperture Massive MIMO? Will machine learning be the game-changer in 6G? What is 5G Dynamic Spectrum Sharing? What happened to device-to-device communications; is it an upcoming 5G feature? Will full-duplex radios be adopted in the future? If you have a question or idea for a future topic, please share it as a comment to the YouTube version of this episode.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 23: Wireless Localization and Sensing (With Henk Wymeersch)

We have now released the 23rd episode of the podcast Wireless Future! It has the following abstract:

For each wireless generation, we are using more bandwidth and more antennas. While the primary reason is to increase the communication capacity, it also increases the network’s ability to localize objects and sense changes in the wireless environment. The localization and sensing applications impose entirely different requirements on the desired signal and channel properties than communications. To learn more about this, Emil Björnson and Erik G. Larsson have invited Henk Wymeersch, Professor at Chalmers University of Technology, Sweden. The conversation covers the fundamentals of wireless localization, the historical evolution, and future developments that might involve machine learning, terahertz bands, and reconfigurable intelligent surfaces. Further details can be found in the articles “Collaborative sensor network localization” and “Integration of communication and sensing in 6G”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 22: Being Near or Far in Wireless

We have now released the 22nd episode of the podcast Wireless Future! It has the following abstract:

Wireless signals look different when observed near to versus far from the transmitter. The notions of near and far also depend on the physical size of the transmitter and receiver, as well as on the wavelength. In this episode, Erik G. Larsson and Emil Björnson discuss these fundamental phenomena and how they can be utilized when designing future communication systems. Concepts such as near-field communications, finite-depth beamforming, mutual coupling, and new spatial multiplexing methods such as orbital angular momentum (OAM) are covered. To get more technical details, you can read the paper “A Primer on Near-Field Beamforming for Arrays and Reconfigurable Intelligent Surfaces”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 21: Wireless Coverage Without Beamforming

We have now released the 21st episode of the podcast Wireless Future! It has the following abstract:

The latest wireless technologies rely heavily on beamformed data transmissions, implemented using antenna arrays. Since the signals are spatially directed towards the location of the receiver, the transmitter needs to know where to point the beam. Before the wireless link has been established, the transmitter will not have such knowledge. Hence, the geographical coverage of a network is determined by how we can transmit in the absence of beamforming gains. In this episode, Emil Björnson and Erik G. Larsson discuss how to achieve wide-area coverage in wireless networks without beamforming. The conversation covers deployment fundamentals, pathloss characteristics, beam sweeping, spatial diversity, and space-time codes. To learn more, you can read the textbook “Space-Time Block Coding for Wireless Communications”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Episode 20: Wireless Solutions for the Internet of Things (With Liesbet Van der Perre)

We have now released the 20th episode of the podcast Wireless Future! It has the following abstract:

Many objects around us are embedded with sensors and processors to create the Internet of Things (IoT). Wireless connectivity is an essential component for enabling these devices to exchange data without human interaction. To learn more about this development, Erik G. Larsson and Emil Björnson have invited Liesbet Van der Perre, Professor at KU Leuven, Belgium. The conversation covers IoT applications, connectivity solutions, powering, security, sustainability, and e-waste. Further details can be found in the article “The Art of Designing Remote IoT Devices—Technologies and Strategies for a Long Battery Life”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Do We Need More Bandwidth?

The bit rate (bit/s) is so tightly connected with the bandwidth (Hz) that the computer science community uses these words interchangeably. This makes good sense when considering fixed communication channels (e.g., cables), for which these quantities are proportional, with the spectral efficiency (bit/s/Hz) being the proportionality constant. However, when dealing with time-varying wireless channels, the spectral efficiency can vary by orders of magnitude depending on the propagation conditions (e.g., between the cell center and the cell edge), which weakens the connection between the rate and the bandwidth.
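To make this concrete, here is a minimal Python sketch based on the Shannon formula; the 100 MHz bandwidth and the SNR values are assumptions chosen purely for illustration, showing how the same bandwidth can yield rates that differ by nearly two orders of magnitude between the cell center and the cell edge:

```python
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon spectral efficiency in bit/s/Hz for a given SNR in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

bandwidth_hz = 100e6  # 100 MHz of bandwidth (assumed for illustration)

# Hypothetical SNR values: strong at the cell center, weak at the cell edge.
for label, snr_db in [("cell center", 30), ("cell edge", -10)]:
    se = spectral_efficiency(snr_db)   # bit/s/Hz
    rate = bandwidth_hz * se           # bit/s
    print(f"{label}: {se:.2f} bit/s/Hz -> {rate / 1e6:.0f} Mbit/s")
```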

The peak rate of a 4G device can reach 1 Gbit/s, and 5G devices are expected to reach 20 Gbit/s. These numbers greatly surpass the needs of both the most demanding contemporary use cases, such as the 25 Mbit/s required by 4K video streaming, and envisioned virtual reality applications that might require a few hundred Mbit/s. One can certainly imagine other futuristic applications that are more demanding, but since there is a limit to how much information the human perception system can process in real time, these are typically “data shower” situations where a huge dataset must be momentarily transferred to/from a device for later utilization or processing. I think it is fair to say that future networks cannot be built primarily for such niche applications. That is why I made the following one-minute video claiming that wireless doesn’t need more bandwidth but higher efficiency, so that we can deliver bit rates close to the current peak rates most of the time, not only under ideal circumstances.
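To put these peak rates in perspective, a trivial back-of-the-envelope calculation (using the figures from the paragraph above; the 300 Mbit/s virtual-reality figure is just an assumed point in the “few hundred Mbit/s” range) shows how much headroom they leave:

```python
# Back-of-the-envelope check using the rates quoted above.
peak_rate_5g = 20e9   # bit/s, 5G peak rate
stream_4k = 25e6      # bit/s, one 4K video stream
vr_session = 300e6    # bit/s, an assumed point in the "few hundred Mbit/s" range

print(f"4K streams within one 5G peak rate: {peak_rate_5g / stream_4k:.0f}")   # 800
print(f"VR sessions within one 5G peak rate: {peak_rate_5g / vr_session:.0f}") # 67
```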

Why are people talking about THz communications?

The spectral bandwidth has increased with every wireless generation, so naturally the same thing will happen in 6G. This is the kind of argument that you might hear from proponents of (sub-)THz communications, which is synonymous with operating at carrier frequencies beyond 100 GHz, where huge bandwidths are available for utilization. The main weakness of this argument is that increasing the bandwidth has never been the main goal of wireless development, only a convenient way to increase the data rate.

As the wireless data traffic continues to increase, the main contributing factor will not be that our devices require much higher instantaneous rates when they are active, but that more devices are active more often. Hence, I believe the most important performance metric is the maximum traffic capacity measured in bit/s/km², which describes the accumulated traffic that the active devices can generate in a given area.

The traffic capacity is determined by three main factors, illustrated by the short sketch after this list:

  1. The number of spatially multiplexed devices;
  2. The spectral efficiency per device; and
  3. The bandwidth.
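Here is a minimal numerical sketch of how these three factors multiply into the area traffic capacity; every number is an assumption chosen only to illustrate the relation, with the device count interpreted per km²:

```python
# Minimal sketch of how the three factors multiply into the area traffic capacity.
# Every number below is an assumption chosen only to illustrate the relation.
devices_per_km2 = 100    # spatially multiplexed devices active per km^2
se_per_device = 2.0      # spectral efficiency per device, bit/s/Hz
bandwidth_hz = 100e6     # 100 MHz

capacity = devices_per_km2 * se_per_device * bandwidth_hz  # bit/s/km^2
print(f"Traffic capacity: {capacity / 1e9:.0f} Gbit/s/km^2")  # 20 Gbit/s/km^2

# Doubling any single factor doubles the capacity; more bandwidth is only one of three knobs.
print(f"With twice the multiplexing: {2 * capacity / 1e9:.0f} Gbit/s/km^2")
```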

We can certainly improve this metric by using more bandwidth, but it is not the only way, and it mainly helps users that have good channel conditions. The question that researchers need to ask is: What is the preferred way to increase the traffic capacity from a technical, economic, and practical perspective?

I don’t think we have a conclusive answer to this yet, but it is important to remember that even if the laws of nature stay constant, the preferred solution can change with time. A concrete example is the development of processors, for which the main computing performance metric is the number of floating-point operations per second (FLOPS). Improving this metric used to be synonymous with increasing the clock speed, but that trend has now been replaced with increasing the number of cores and using parallel threads, because this is more power- and heat-efficient than pushing the clock speed beyond its current range.

The corresponding development in wireless communications would be to stop increasing the bandwidth (which determines the sampling rate of the signals and the clock speed needed for processing) and instead focus on multiplexing many data streams, which take the role of the threads in this analogy, while balancing the spectral efficiency between the streams. The following video describes my thoughts on how to develop wireless technology in that direction:

As a final note, as the traffic capacity in wireless networks increases, there will be some point-to-point links that require huge capacity. This is particularly the case between an access point and the core network. These links will eventually require cables or wireless technologies that can handle many Tbit/s, and the wireless option will then require THz communications. The points that I make above apply to the wireless links at the edge, between devices and access points, not to the backhaul infrastructure.