Category Archives: 5G

Episode 43: Ten Things That Are Missing in Many Textbooks

We have now released the 43rd episode of the podcast Wireless Future. It has the following abstract:

There are many textbooks to choose between when learning the basics of wireless communications. In this episode, Erik G. Larsson and Emil Björnson discuss the recent book “Introduction to Multiple Antenna Communications and Reconfigurable Surfaces” that Emil has written together with Özlem Tugfe Demir. The conversation focuses on ten subtopics that are covered by the book and differentiate it from many previous textbooks. These are related to the dimensionality of physical constants, the choice of performance metrics, and the motivation behind OFDM signaling. Various system modeling characteristics are discussed, including how the antenna array geometry impacts the channel, dual-polarized signals, carrier frequency dependencies, and the connection between models for small-scale fading and radar cross-sections. The roles of non-orthogonal multiple access, hybrid beamforming, and reconfigurable intelligent surfaces are also covered. The textbook is meant for teaching an introductory course on the topic and can be freely downloaded from https://www.nowpublishers.com/NowOpen

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Can Explainable AI Make Wireless Networks Better?

The growing emphasis on “explainable AI” in recent years highlights a fundamental issue: many previous AI algorithms have operated as black boxes, with little understanding of what information is extracted and utilized from the training data. While this opaqueness may be tolerated in fields like natural language processing or computer vision, where traditional algorithms have struggled, applying unexplainable AI to an established field like wireless communications is both unnecessary and counterproductive.

In wireless communications, decades of rigorous research and experience from real-world network operation have produced well-functioning, human-crafted algorithms based on established models (e.g., for wave propagation and randomness) and optimized methodologies (e.g., from information theory). If AI is to surpass these state-of-the-art solutions, we must understand why: Is it uncovering previously overlooked characteristics in real-world data, or is it merely exploiting artifacts in a synthetic dataset? The latter is a significant risk that the research community must be mindful of, particularly when training data is generated from simplified numerical models that don’t fully capture real-world complexities but only resemble measured data in some statistical sense.

I have identified three characteristics that are particularly promising to learn from data to improve the operation of the physical and MAC layers in future wireless networks.

  1. User mobility: People typically move through the coverage area of a wireless network in a structured manner, but it can be hard to track and predict the mobility using traditional signal processing methods, except in line-of-sight scenarios. AI algorithms can learn complex maps (e.g., channel charts) and use them for predictive tasks such as beam tracking, proactive handover, and rate adaptation.
  2. User behaviors: People are predictable when it comes to when, how, and where they use particular user applications, as well as what content they are looking for. An AI algorithm can learn such things and utilize them to enhance the user experience through proactive caching or to save energy by turning off hardware components in low-traffic situations.
  3. Application behaviors: The data that must be communicated wirelessly to run an application is generally bursty, even if the application is used continuously by the user. The corresponding traffic pattern can be learned by AI and used for proactive scheduling and other resource allocation tasks. Auto-encoders can also be utilized for data compression, an instance of semantic communication.

A cellular network that utilizes these characteristics will likely implement different AI algorithms in each cell because the performance benefits come from tuning parameters based on the local conditions. AI can also be used at the network operation level for automation and to identify anomalies.

Many of the features mentioned above already exist in 5G networks but have been added on top of the standard at the vendor’s discretion. The vision for 6G as an “AI-native” network is to provide a standardized framework for data collection, sharing, and utilization across the radio access network. This could enable AI-driven optimizations at a scale previously unattainable, unlocking the full potential of AI in wireless communications. When this happens, we must not forget about explainability: there must at least be a high-level understanding of what characteristics are learned from the data and why they can be utilized to make the network more efficient.

I give some further examples of AI opportunities in wireless networks, as well as associated risks and challenges, in the following video:

Episode 42: The Contours of 6G are Taking Shape

We have now released the 42nd episode of the podcast Wireless Future. It has the following abstract:

Even if the 6G standardization is just beginning, the last five years of intensive research have illuminated the contours of the next-generation technology. In this episode, Emil Björnson and Erik G. Larsson discuss the recent paper “6G takes shape” written by leading researchers at UT Austin and Qualcomm. The conversation covers lessons learned from 5G, the potential role of new frequency bands and waveforms, and new coding schemes and forms of MIMO. The roles of machine learning and generative AI, as well as satellite integration and Open RAN, are also discussed. The original paper by Jeffrey G. Andrews, Todd E. Humphreys, and Tingfang Ji will appear in the IEEE BITS magazine, and the preprint is available on arXiv.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

The Confusion that Creates Disappointment with 5G

Five years into the 5G era, there is 5G coverage in many places, but the networks have yet to fulfill their initial promises. Many consumers and businesses are disappointed since they anticipated revolutionary changes: a world with ubiquitous ultra-fast connectivity, autonomous connected vehicles, and a digitalized society driven by many Internet-of-things sensors. While such use cases remain on the roadmap of possibilities, the telecom industry’s 5G marketing gave the impression that they would be instantly available in the 5G era. The core of the problem is that no clear distinction was made between 5G networks and 5G services. I will elaborate on these issues and how to avoid a similar disappointment with 6G.

The four stages of development

The development and lifespan of a cellular technology generation can be roughly divided into four stages:

The first stage defines the technical performance requirements in categories such as data rates, reliability, latency, and number of connected devices. These were called IMT-2020 in the case of 5G and were first published in 2015. The performance numbers were selected by studying prospective use cases and their potential connectivity needs. For example, a collection of European vendors, operators, verticals, and universities studied 12 test cases within the METIS project.

The second stage consists of developing a standard that is theoretically capable of reaching the IMT-2020 requirements, as well as developing standard-compliant practical implementations, including hardware and software. The first release of the 5G standard was finalized in 2018, and field trials were conducted both before and afterward.

The third stage entails deploying commercial networks. The first 5G-compliant networks were opened for customers in 2019, and the coverage has since increased from a few city centers per country to the majority of the population in many countries. The service offerings have focused on mobile broadband and fixed wireless access—the same services as in 4G, but with faster speed per device, more data per month in subscriptions, and fewer congestion issues.

The 5G network infrastructure also provides the foundation for companies and industries to design, test, and launch radically new connectivity services, particularly building on ultra-reliable low-latency communications (URLLC) and massive machine-type communication (mMTC). These are two 5G capabilities that did not exist in 4G but were added in the hope that they would lead to new services and revenue streams in the 5G era. Since every person already has a smartphone and is reluctant to pay more for their subscription, increased revenue is tightly connected to creating new service categories and connecting new devices. The service evolution is the fourth and final stage in the development, and there is a feedback loop to Stages 2 and 3 since the standard and network also evolve within each generation.

In a nutshell, 5G networks are the infrastructure that enables connectivity, while 5G services are new monetizable use cases of the networks. The current 5G disappointment was created because the public could not distinguish between the two but expected to get the previously advertised 5G services when the 5G networks were launched. This happened even though 5G networks have been successful in delivering on the promised technical capabilities, such as higher data rates and capacity.

Deploying a cellular network is only the beginning. The network will exist for decades, and new services can gradually be developed.

Prospective versus real services

The prospective use cases identified in Stage 1 of the 5G development were nothing but a brainstorming list created by telecom researchers tasked with predicting what might be required ten years into the future. Unfortunately, the telecom industry began advertising them heavily to the public, giving the impression that these were the real services that 5G would support in Stage 4. Now that 5G networks have existed for five years without any of these 5G-branded services, it is unsurprising that people and businesses have been disenchanted.

5G networks will hopefully enable some new “5G services” that can bring new revenue streams to the financially stressed mobile network operators. However, designing such services takes time and effort because one must create an end-to-end ecosystem with the required hardware, software, and business models. For example, wireless automation of a factory requires collaboration between the factory owner, manufacturers of factory equipment, vendors for both network infrastructure and devices, and the intended network operators.

The development of new 5G services was initially delayed because most telecom operators deployed non-standalone 5G networks, consisting of 5G base stations anchored to a legacy 4G core network infrastructure. New capabilities such as URLLC and mMTC cannot be used without a 5G core, which created a chicken-and-egg situation: New services could not be created without a 5G core, and the cost of deploying a 5G core could not be justified since no new services were ready for launch.

Fortunately, the core networks have now begun to be upgraded, so testing of new use cases and services is possible. Since 2024, I have been an advisor for the Advanced Digitalisation program in Sweden, which provides co-funding for companies that collaborate in developing 5G services that fit their businesses. There is much enthusiasm for these activities, but also an awareness that many pieces of the 5G service puzzle are still missing.

Lessons learned for 6G

Based on these experiences, I get worried when I see 6G services already being discussed in publications, conferences, and press releases. The 6G development has just entered Stage 2. The IMT-2030 requirements are taking form, and during the next five years, the 6G network technology will be standardized and developed to reach those goals. We can expect 6G network deployments to begin around 2030 and initially provide better performance and efficiency for already existing services. It is not until 2035 that entirely new services might take off and hopefully create new revenue streams for the network operators. These services might require new functionalities provided by 6G, such as edge intelligence, radar sensing, and ubiquitous coverage. They could also relate to augmented reality glasses for consumers or digitalization/automation of businesses or society—we can only speculate at this point.

Right now, the essential thing is to develop a 6G technology that can provide even greater communication, localization, and sensing performance. The future will tell which innovative services can be built on top of the 6G network infrastructure, when they will be available, and how to monetize them. We must remember that revolutionary changes happen slowly, are often only observable retrospectively, and usually differ greatly from the initial visions.

If the telecom industry describes futuristic use cases in its initial 6G marketing, it will likely create a 6G dissatisfaction that resembles the current 5G disappointment. Hence, I urge the telecom industry to work in parallel on new 5G services that can be launched in the coming years and on 6G network technology that can be rolled out in the next decade, but to focus the marketing on the 5G services. Only when 6G networks reach the trial phase can the development of 6G services truly begin, so that there is something concrete to present to the public.

There are many prospective use cases for 6G networks, including mobile augmented reality. However, it will take a long time before any futuristic service materializes.

Rethinking Wireless Repeaters

In what ways could we improve cellular-massive-MIMO based 5G? Well, to start with, this technology is already pretty good. But coverage holes, and difficulties in sending multiple streams to multi-antenna users because of insufficient channel rank, remain issues.

Perhaps the ultimate solution is distributed MIMO (also known as cell-free massive MIMO). But while this is at heart a powerful technology, installing backhaul seems dreadfully expensive, and achieving accurate phase-alignment for coherent multiuser beamforming on the downlink is a difficult technical problem. Another option is RIS – but they have large form factors and require a lot of training and control overhead, and probably, in practice, some form of active filtering to make them sufficiently band-selective.

A different, radically new approach is to deploy large numbers of physically small and cheap wireless repeaters that receive and instantaneously retransmit signals – appearing as if they were ordinary scatterers in the channel, but with amplification. Repeaters, as such, are deployed today already, but only in niche use cases. Could they be deployed at scale, in swarms, within the cells? What would be required of the repeaters, and how well could a repeater-assisted cellular massive MIMO system work, compared to distributed MIMO? What are the fundamental limits of this technology?
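As a rough illustration of the idea, the following sketch (my own, not taken from the paper) models each repeater as an amplified scatterer in the end-to-end uplink channel. The i.i.d. Rayleigh-fading channels, the number of repeaters, and the amplification factor are illustrative assumptions, and the noise re-amplified by the repeaters is ignored:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 64      # antennas at the massive MIMO base station
K = 10      # single-antenna repeaters scattered in the cell
rho = 10.0  # amplitude gain of each repeater (assumed, for illustration)

# i.i.d. Rayleigh-fading channels (illustrative; real channels also have
# path loss, spatial correlation, and possibly line-of-sight components).
h_direct = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
G = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)  # BS-repeater
f = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)           # user-repeater

# Each repeater instantaneously retransmits what it hears, so it shows up
# as an extra amplified scattering path in the effective channel.
h_eff = h_direct + G @ (rho * f)

# Channel gain of maximum-ratio combining with vs. without the repeaters
# (optimistic: the noise forwarded by the repeaters is not accounted for).
gain_db = 10 * np.log10(np.linalg.norm(h_eff) ** 2 / np.linalg.norm(h_direct) ** 2)
print(f"channel gain improvement: {gain_db:.1f} dB")
```

The interesting questions start exactly where this toy model stops: repeater noise amplification, stability (feedback between repeaters), and how the extra paths affect channel rank.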

At last, some significant new research directions for the wireless PHY community!

Paper: https://arxiv.org/pdf/2406.00142

Limited-Time Offer: New MIMO book for $50

If you want to develop a strong foundational knowledge of MIMO technology, I recommend that you read our new book Introduction to Multiple Antenna Communications and Reconfigurable Surfaces.

The PDF is available for free from the publisher’s website, and you can download the simulation code and answers to the exercises from GitHub.

I am amazed at how many people have already downloaded the PDF. However, books should ideally be read in physical format, so we have arranged a special offer for you. Until May 15, you can also buy color-printed copies of the book for only $50 (the list price is $145). To get that price, click on “Buy Book” at the publisher’s website, enter the discount code 919110, and deselect “Add Track & Trace Shipping” (the courier service costs extra).

Here is a video where I explain why we wrote the book and who it is for:

How Many Beams Can You Send from a MIMO Array?

I receive many questions from students and researchers on social media, including this blog, YouTube, and ResearchGate. I do my best to answer such questions while commuting to work or when I have a few minutes between meetings. Some questions come up quite frequently, making it worthwhile to create videos where I try to answer them once and for all.

Below, you can find the first video in that series, and it answers the question: How many beams can you send from a MIMO array? As you will notice when watching the video, we obtain a more appropriate question if “can you send” is replaced by “do you want to send”.
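As a hint at the answer, here is a minimal sketch of my own (not from the video) showing that an M-antenna uniform linear array with half-wavelength spacing supports exactly M mutually orthogonal (DFT) beams, so any additional beam necessarily overlaps with the existing ones:

```python
import numpy as np

M = 8  # antennas in a uniform linear array with half-wavelength spacing

def steering(theta):
    """Unit-norm array response toward angle theta (radians from broadside)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)

# The M columns of the normalized DFT matrix are steering vectors toward
# M distinct angles and are mutually orthogonal: at most M orthogonal beams.
dft_beams = np.fft.fft(np.eye(M)) / np.sqrt(M)
gram = dft_beams.conj().T @ dft_beams
assert np.allclose(gram, np.eye(M))

# Any extra beam direction cannot be orthogonal to all M DFT beams.
extra = steering(0.1)  # arbitrary illustrative angle
overlap = np.abs(dft_beams.conj().T @ extra)
print(f"max correlation with the DFT beams: {overlap.max():.2f}")
```

This only bounds how many *orthogonal* beams exist; as the video argues, the better question is how many beams you *want* to send, which depends on the users and their channels rather than on the array alone.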

If you have remaining doubts or comments after watching it, please feel free to post a comment on YouTube.