All posts by Emil Björnson

Book Review: The 6G Manifesto

“6G should be for the many, not the few” is the final sentence of the book The 6G Manifesto by William Webb (published in October 2024). He presents a vision for the evolution of wireless communication technology driven by the goal of providing everyone with sufficiently fast connectivity to support their applications everywhere and at any time. This design goal was uncommon eight years ago when Webb described it in his book, “The 5G Myth”, which I reviewed previously. However, it has become quite mainstream since then. “Uniformly great service for everyone” [Marzetta, 2015] is the main motivation behind the research into Cell-free Massive MIMO systems. Ericsson uses the term limitless connectivity for its 6G vision that “mobile networks deliver limitless connectivity for all applications, allowing anyone and anything to connect truly anywhere and at any time”. The International Telecommunication Union (ITU) has dedicated one of its official 6G usage scenarios to ubiquitous connectivity, which should “provide digital inclusion for all by meaningfully connecting the rural and remote communities, further extending into sparsely populated areas, and maintaining the consistency of user experience between different locations including deep indoor coverage.” [Recommendation ITU-R M.2160-0, 2023]

The unique and interesting contribution of the book is the proposed way to realize the vision. The author claims that we already have the necessary network equipment to provide almost ubiquitous connectivity, but that it is unavailable to the users because of poor system integration. Today, each device is subscribed to one out of multiple cellular networks, connected to at most one out of many WiFi networks, and seldom capable of using direct-to-satellite connectivity. Webb’s central claim is that we would reach far toward the goal of ubiquitous connectivity if each device were always connected to the “best” network (cellular, WiFi, satellite, etc.) among those that can be reached at the given location. This approach makes intuitive sense if one recalls that your phone can often present a long list of WiFi networks (to which you lack the passwords) and that, in some poor-coverage situations, only emergency calls are possible (these are carried by another operator’s cellular network).

Some coverage holes will remain even if each device can seamlessly connect to any available network. Webb categorizes these challenges and suggests varying solutions. Rural coverage gaps can potentially be filled by using more cell towers, high-altitude platforms, and satellite connectivity. Urban not-spots can be removed by selective deployment of small cells. Issues in public transportation can be addressed by leaky feeders in tunnels and in-train WiFi connected to cellular networks through antennas deployed outside.

Enabling Multi-Network Coordination

Letting any device connect to any available wireless network is easier said than done. There are legal, commercial, and safety issues to overcome in practice, and no concrete solutions are provided in the book. Instead, the book focuses on the technological level, particularly how to control which network a device is connected to and how to manage handovers between networks. Webb argues that the device cannot decide which network to use because it lacks sufficient knowledge about the networks. Similarly, the mobile network operator that the user is subscribed to cannot decide because it has limited knowledge of other networks. Hence, the proposed solution is to create a centralized multi-network coordinator, e.g., at the national level. This entity would act as a middleman with detailed knowledge of all networks so that it can control the connectivity of all devices. The book suggests using the QUIC transport-layer protocol to minimize the interruptions in data transfer when devices are moved between networks.
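To make the idea more concrete, here is a minimal sketch (with entirely hypothetical metrics, weights, and network names) of the kind of decision such a coordinator would have to make for each device. The book does not prescribe any particular algorithm, so this is just an illustration, not Webb’s method:

```python
from dataclasses import dataclass

@dataclass
class NetworkOption:
    """A network that a device can currently reach (all attributes are hypothetical)."""
    name: str             # e.g., "Operator A 5G", "Cafe WiFi", "LEO satellite"
    downlink_mbps: float  # estimated achievable data rate
    latency_ms: float     # estimated round-trip latency
    load: float           # current load, from 0.0 (idle) to 1.0 (congested)

def rank_networks(options: list[NetworkOption], min_mbps: float = 10.0) -> list[NetworkOption]:
    """Rank the reachable networks for one device.

    The scoring is a made-up illustration: strongly prefer networks whose spare
    capacity meets the rate requirement, then trade off spare rate against latency.
    """
    def score(n: NetworkOption) -> float:
        spare_rate = n.downlink_mbps * (1.0 - n.load)
        meets_requirement = spare_rate >= min_mbps
        return (1000.0 if meets_requirement else 0.0) + spare_rate - 0.5 * n.latency_ms

    return sorted(options, key=score, reverse=True)

reachable = [
    NetworkOption("Operator A 5G", downlink_mbps=80, latency_ms=25, load=0.7),
    NetworkOption("Cafe WiFi", downlink_mbps=150, latency_ms=15, load=0.2),
    NetworkOption("LEO satellite", downlink_mbps=30, latency_ms=60, load=0.4),
]
best = rank_networks(reachable)[0]
print(f"Coordinator assigns the device to: {best.name}")
```

In this toy example, the lightly loaded WiFi network wins even though the device has a cellular subscription. The hard part in practice is gathering trustworthy load and quality information from networks owned by different parties, which is exactly why Webb argues for a dedicated coordination entity.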

There are already limited-scale attempts to do what Webb suggests. One example is the eduroam system, which runs across thousands of research and education institutes. Once a device is connected to one such WiFi network, it will automatically connect to any of them. Another example is how recent iPhone products have been able to send emergency messages over satellites without requiring a subscription. Many phones can also use WiFi and cellular links simultaneously to make the connectivity more stable but at the price of shorter battery life. A third example is the company Icomera, which sells an aggregation platform for trains that integrates cellular and satellite links from multiple network operators to provide more stable connectivity for travelers.

The practical benefits of having a multi-network coordinator are clear and convincing. However, there is a risk that the computations and overhead signaling required to operate this coordination entity at scale will be enormous; thus, further research and development are required. The book will hopefully motivate researchers, businesses, and agencies to look deeper into these issues.

Poorly Substantiated Claims

The “wheel diagram” from ITU, with the usage scenario ubiquitous connectivity listed at the bottom.

The main issue with the book is how it promotes its interesting vision by making some poorly substantiated claims and poking fun at the mainstream 6G visions, which it calls “5G-on-steroids”. Two chapters are dedicated to presenting and discussing cherry-picked statements from white papers by manufacturers and academics to give the impression that they are advocating something entirely different from what the book does. The fact that ubiquitous connectivity is one of the six 6G usage scenarios from the ITU is overlooked (see the “wheel diagram”). It is easy to find alternative quotes from Ericsson, Huawei, Nokia, Samsung, and academia that support the author’s vision, but he chose not to include them.

The “5G-on-steroids” moniker is used to ridicule the need for more capable 6G networks, in terms of bit rates, latency, and network capacity. One part of the argument is: “As seen with 5G, there are no new applications on the horizon, and even if there were 5G is capable of handling all of them. Operators do not want 6G because they perceive it will result in expense for them for no benefit.” This is a strange claim because operators are free to choose what technology and functionalities to deploy based on their business plans. In the 5G era, they have so far mainly deployed non-standalone 5G networks to manage the increasing data demand in their networks. The extra functionalities provided by standalone 5G networks (i.e., by adding a 5G core) are meant to create new revenue. Since most people already have cellphone subscriptions, new revenue streams require creating new services and devices for businesses or people. These things take time but remain on the horizon, as I discussed in a blog post about the current 5G disappointment.

Even in a pessimistic situation where no new monetizable services arise, one would think that the operators need more capacity in their networks since the worldwide data traffic grows year after year. However, based on the observation that the traffic growth rate has decayed over the last few years, the book claims that the data traffic will plateau before the end of the decade. No convincing evidence is provided to support this claim; the author only refers to his previous book as if it had established the claim as a fact. I hypothesize that the wireless traffic growth rate will converge to the overall Internet traffic growth rate (which was 17% in 2024) because wireless technology is becoming deeply integrated into the general connectivity ecosystem, and the traffic will continue to grow just as the utilization of most other resources on Earth does. To be fair, these are just two different speculations/predictions of the future, so we must wait and see what happens. The main issue is that the book uses the zero-traffic-growth assumption as the main argument for why the only problem that remains to be solved in the telecom world is global coverage, which is shaky ground to build on.
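To make the gap between the two speculations concrete, here is a small back-of-the-envelope comparison. Apart from the 17% figure quoted above, all numbers are invented for illustration:

```python
# Two illustrative traffic projections, normalized so that today's traffic is 1.0.
years = 10

# Scenario 1 (the book's view, as I read it): the annual growth rate keeps decaying,
# so traffic plateaus. Here it starts at 20% and is assumed to halve every year.
traffic, rate, plateau_curve = 1.0, 0.20, []
for _ in range(years):
    traffic *= 1.0 + rate
    rate *= 0.5
    plateau_curve.append(traffic)

# Scenario 2 (my speculation): the growth rate converges to the overall Internet
# traffic growth, around 17% per year.
constant_curve = [1.17 ** (y + 1) for y in range(years)]

print(f"After {years} years: plateau scenario ~{plateau_curve[-1]:.1f}x traffic, "
      f"constant 17% growth ~{constant_curve[-1]:.1f}x traffic")
```

The point is not the exact numbers but that the two assumptions diverge by roughly a factor of three within a decade, and so do the conclusions one draws about the need for more network capacity.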

Another peculiar claim in the book is that the 5G air interface was “anything but new” because it remained built on OFDM. This ignores the fact that non-standalone 5G is all about exploiting Massive MIMO (an air interface breakthrough) to enable higher bit rates and cell capacity through spatial multiplexing, in addition to the extra bandwidth. The oversight becomes particularly clear when the book discusses how 6G might reach 10 times higher data rates than 5G. It is argued that 10 times more bandwidth is needed, which can only be found at much higher carrier frequencies, where many more base stations are needed, so the deployment cost would grow rapidly. This chain of arguments is challenged by the fact that one can alternatively achieve 10x by using 2.5 times more spectrum and 4 times more MIMO layers, which is a plausible situation in the upper mid-band without the need for major densification.
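The underlying arithmetic is simply that the data rate scales roughly in proportion to the bandwidth times the number of MIMO layers, for a fixed spectral efficiency per layer. The snippet below only illustrates that the same 10x target can be reached along both routes; the baseline numbers are made up:

```python
# Rough scaling: data rate ∝ bandwidth × number of MIMO layers, assuming the
# spectral efficiency per layer stays roughly the same. All numbers are illustrative.
baseline = {"bandwidth_MHz": 100, "mimo_layers": 4}    # a 5G-like reference point

option_a = {"bandwidth_MHz": 1000, "mimo_layers": 4}   # 10x bandwidth at much higher carrier frequencies
option_b = {"bandwidth_MHz": 250, "mimo_layers": 16}   # 2.5x bandwidth and 4x layers in the upper mid-band

def rate_gain(option, base=baseline):
    bandwidth_factor = option["bandwidth_MHz"] / base["bandwidth_MHz"]
    layer_factor = option["mimo_layers"] / base["mimo_layers"]
    return bandwidth_factor * layer_factor

print(f"Option A (more spectrum only):   {rate_gain(option_a):.0f}x")
print(f"Option B (spectrum + more MIMO): {rate_gain(option_b):.0f}x")
```

Option B requires larger antenna arrays rather than a much denser grid of base stations, which is why the cost argument in the book is not as clear-cut as it first appears.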

My Verdict

The 6G Manifesto presents a compelling vision for reaching globally ubiquitous network coverage by letting any device connect to any wireless network already in place. A centralized multi-network coordinator must be created to govern such a system. The remaining coverage holes could be filled by investing in new infrastructure that covers only the missing pieces. It is worth investigating whether this is a scalable solution with reasonable operational costs and whether one can build a meaningful legal and commercial framework around it. However, when reading the book, one must keep in mind that the descriptions of the current situation, the prospects of creating new services that generate more traffic and revenue, and the mainstream 6G visions are shaky and adapted to fit the book’s narrative.

Episode 43: Ten Things That Are Missing in Many Textbooks

We have now released the 43rd episode of the podcast Wireless Future. It has the following abstract:

There are many textbooks to choose between when learning the basics of wireless communications. In this episode, Erik G. Larsson and Emil Björnson discuss the recent book “Introduction to Multiple Antenna Communications and Reconfigurable Surfaces” that Emil has written together with Özlem Tugfe Demir. The conversation focuses on ten subtopics that are covered by the book and differentiate it from many previous textbooks. These are related to the dimensionality of physical constants, the choice of performance metrics, and the motivation behind OFDM signaling. Various system modeling characteristics are discussed, including how the antenna array geometry impacts the channel, dual-polarized signals, carrier frequency dependencies, and the connection between models for small-scale fading and radar cross-sections. The roles of non-orthogonal multiple access, hybrid beamforming, and reconfigurable intelligent surfaces are also covered. The textbook is meant for teaching an introductory course on the topic and can be freely downloaded from https://www.nowpublishers.com/NowOpen

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Can Explainable AI Make Wireless Networks Better?

The growing emphasis on “explainable AI” in recent years highlights a fundamental issue: many previous AI algorithms have operated as black boxes, with little understanding of what information is extracted and utilized from the training data. While this opaqueness may be tolerated in fields like natural language processing or computer vision, where traditional algorithms have struggled, applying unexplainable AI to an established field like wireless communications is both unnecessary and counterproductive.

In wireless communications, decades of rigorous research and experience from real-world network operation have produced well-functioning, human-crafted algorithms based on established models (e.g., for wave propagation and randomness) and optimized methodologies (e.g., from information theory). If AI is to surpass these state-of-the-art solutions, we must understand why: Is it uncovering previously overlooked characteristics in real-world data, or is it merely exploiting artifacts in a synthetic dataset? The latter is a significant risk that the research community must be mindful of, particularly when training data is generated from simplified numerical models that don’t fully capture real-world complexities but only resemble measured data in some statistical sense.

I have identified three characteristics that are particularly promising to learn from data to improve the operation of the physical and MAC layers in future wireless networks.

  1. User mobility: People typically move through the coverage area of a wireless network in a structured manner, but it can be hard to track and predict the mobility using traditional signal processing methods, except in line-of-sight scenarios. AI algorithms can learn complex maps (e.g., channel charts) and use them for predictive tasks such as beam tracking, proactive handover, and rate adaptation.
  2. User behaviors: People are predictable when it comes to when, how, and where they use particular user applications, as well as what content they are looking for. An AI algorithm can learn such things and utilize them to enhance the user experience through proactive caching or to save energy by turning off hardware components in low-traffic situations.
  3. Application behaviors: The data that must be communicated wirelessly to run an application is generally bursty, even if the application is used continuously by the user. The corresponding traffic pattern can be learned by AI and used for proactive scheduling and other resource allocation tasks. Auto-encoders can also be utilized for data compression, an instance of semantic communication. (A minimal sketch related to the last two points follows this list.)
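As a minimal illustration of the last two points, the following sketch “learns” a daily traffic profile from an invented packet log and uses it to plan sleep periods. It is a deliberately trivial placeholder for the far more capable AI models discussed above, but it shows the kind of pattern that is worth extracting from data:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical log: packets per minute in one cell over 14 days, following a
# daily rhythm (busy daytime, quiet nights) plus Poisson randomness.
minutes_per_day, days = 24 * 60, 14
t = np.arange(minutes_per_day)
daily_profile = 10 + 80 * np.sin(2 * np.pi * (t - 6 * 60) / minutes_per_day).clip(min=0)
log = rng.poisson(np.tile(daily_profile, days))

# "Learning" step (deliberately trivial): estimate the expected load for each
# minute of the day by averaging over the logged days.
learned_profile = log.reshape(days, minutes_per_day).mean(axis=0)

# Exploitation step: plan hardware sleep for the minutes whose predicted load is
# below a threshold, instead of reacting only after the traffic has already dropped.
sleep_threshold = 20  # packets per minute, illustrative
sleep_minutes = int(np.sum(learned_profile < sleep_threshold))
print(f"Predicted low-traffic time: {sleep_minutes / 60:.1f} hours per day")
```

A real network would replace the simple averaging with a model that also captures weekly patterns, special events, and per-user behavior, but the explainability requirement stays the same: we should be able to state which pattern was learned and why exploiting it saves energy or improves the service.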

A cellular network that utilizes these characteristics will likely implement different AI algorithms in each cell because the performance benefits come from tuning parameters based on the local conditions. AI can also be used at the network operation level for automation and to identify anomalies.

Many of the features mentioned above already exist in 5G networks but have been added on top of the standard at the vendor’s discretion. The vision for 6G as an “AI-native” network is to provide a standardized framework for data collection, sharing, and utilization across the radio access network. This could enable AI-driven optimizations at a scale previously unattainable, unlocking the full potential of AI in wireless communications. When this happens, we must not forget about explainability: there must at least be a high-level understanding of what characteristics are learned from the data and why they can be utilized to make the network more efficient.

I give some further examples of AI opportunities in wireless networks, as well as associated risks and challenges, in the following video:

Episode 42: The Contours of 6G are Taking Shape

We have now released the 42nd episode of the podcast Wireless Future. It has the following abstract:

Even if the 6G standardization is just beginning, the last five years of intensive research have illuminated the contours of the next-generation technology. In this episode, Emil Björnson and Erik G. Larsson discuss the recent paper “6G takes shape” written by leading researchers at UT Austin and Qualcomm. The conversation covers lessons learned from 5G, the potential role of new frequency bands and waveforms, and new coding schemes and forms of MIMO. The roles of machine learning and generative AI, as well as satellite integration and Open RAN, are also discussed. The original paper by Jeffrey G. Andrews, Todd E. Humphreys, and Tingfang Ji will appear in the IEEE BITS magazine, and the preprint is available on arXiv.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

The Confusion that Creates Disappointment with 5G

Five years into the 5G era, there is 5G coverage in many places, but the networks have yet to fulfill their initial promises. Many consumers and businesses are disappointed since they anticipated revolutionary changes: a world with ubiquitous ultra-fast connectivity, autonomous connected vehicles, and a digitalized society driven by many Internet-of-things sensors. While such use cases remain on the roadmap of possibilities, the telecom industry’s 5G marketing gave the impression that they would be instantly available in the 5G era. The core of the problem is that no clear distinction was made between 5G networks and 5G services. I will elaborate on these issues and how to avoid a similar disappointment with 6G.

The four stages of development

The development and lifespan of a cellular technology generation can be roughly divided into four stages:

The first stage defines the technical performance requirements in categories such as data rates, reliability, latency, and number of connected devices. These were called IMT-2020 in the case of 5G and were first published in 2015. The performance numbers were selected by studying prospective use cases and their potential connectivity needs. For example, a collection of European vendors, operators, verticals, and universities studied 12 test cases within the METIS project.

The second stage consists of developing a standard that is theoretically capable of reaching the IMT-2020 requirements, as well as developing standard-compliant practical implementations, including hardware and software. The first release of the 5G standard was finalized in 2018, and field trials were conducted both before and afterward.

The third stage entails deploying commercial networks. The first 5G-compliant networks were opened for customers in 2019, and the coverage has since increased from a few city centers per country to the majority of the population in many countries. The service offerings have focused on mobile broadband and fixed wireless access—the same services as in 4G, but with faster speed per device, more data per month in subscriptions, and fewer congestion issues.

The 5G network infrastructure also provides the foundation for companies and industries to design, test, and launch radically new connectivity services, particularly building on ultra-reliable low-latency communications (URLLC) and massive machine-type communication (mMTC). These are two 5G capabilities that did not exist in 4G but were added in the hope that they would lead to new services and revenue streams in the 5G era. Since every person already has a smartphone and is reluctant to pay more for their subscription, increased revenue is tightly connected to creating new service categories and connecting new devices. The service evolution is the fourth and final stage in the development, and there is a feedback loop to Stages 2 and 3 since the standard and network also evolve within each generation.

In a nutshell, 5G networks are the infrastructure that enables connectivity, while 5G services are new monetizable use cases of the networks. The current 5G disappointment was created because the public could not distinguish between these things and expected to get the previously advertised 5G services as soon as the 5G networks were launched. This happened even though 5G networks have been successful in delivering on the promised technical capabilities, such as higher data rates and capacity.

Deploying a cellular network is only the beginning. The network will exist for decades, and new services can gradually be developed.

Prospective versus real services

The prospective use cases identified in Stage 1 of the 5G development were nothing but a brainstorming list created by telecom researchers tasked with predicting what might be required ten years into the future. Unfortunately, the telecom industry began advertising them heavily to the public, giving the impression that these were the real services that 5G would support in Stage 4. Now that 5G networks have existed for five years without any of these 5G-branded services, it is unsurprising that people and businesses have become disenchanted.

5G networks will hopefully enable some new “5G services” that can bring new revenue streams to the financially stressed mobile network operators. However, designing such services takes time and effort because one must create an end-to-end ecosystem with the required hardware, software, and business models. For example, wireless automation of a factory requires collaboration between the factory owner, manufacturers of factory equipment, vendors for both network infrastructure and devices, and the intended network operators.

The development of new 5G services was initially delayed because most telecom operators deployed non-standalone 5G networks, consisting of 5G base stations anchored to a legacy 4G core network infrastructure. New capabilities such as URLLC and mMTC cannot be used without a 5G core, which created a chicken-and-egg situation: New services could not be created without a 5G core, and the cost of deploying a 5G core could not be justified since no new services were ready for launch.

Fortunately, the core networks have now begun to be upgraded, so testing of new use cases and services is possible. Since 2024, I have been an advisor for the Advanced Digitalisation program in Sweden, which provides co-funding for companies that collaborate in developing 5G services that fit their businesses. There is much enthusiasm for these activities, but also an awareness that many pieces of the 5G service puzzle remain to be found.

Lessons learned for 6G

Based on these experiences, I get worried when I see 6G services already being discussed in publications, conferences, and press releases. The 6G development has just entered Stage 2. The IMT-2030 requirements are taking form, and during the next five years, the 6G network technology will be standardized and developed to reach those goals. We can expect 6G network deployments to begin around 2030 and initially provide better performance and efficiency for already existing services. It is not until 2035 that entirely new services might take off and hopefully create new revenue streams for the network operators. These services might require new functionalities provided by 6G, such as edge intelligence, radar sensing, and ubiquitous coverage. They could also be related to augmented reality glasses for consumers or the digitalization/automation of businesses or society—we can only speculate at this point.

Right now, the essential thing is to develop a 6G technology that can provide even greater communication, localization, and sensing performance. The future will tell which innovative services can be built on top of the 6G network infrastructure, when they will be available, and how to monetize them. We must remember that revolutionary changes happen slowly, are often only observable retrospectively, and usually differ greatly from the initial visions.

If the telecom industry describes futuristic use cases in its initial 6G marketing, it will likely create a 6G dissatisfaction that resembles the current 5G disappointment. Hence, I urge the telecom industry to work in parallel on developing new 5G services that can be launched in the coming years and 6G network technology that can be rolled out in the next decade, but to focus the marketing on the 5G services. Only when 6G networks reach the trial phase can the development of 6G services truly begin, so that there is something concrete to present to the public.

There are many prospective use cases for 6G networks, including mobile augmented reality. However, it will take a long time before any futuristic service materializes.

Episode 41: 6G in the Upper Mid-Band

We have now released the 41st episode of the podcast Wireless Future. It has the following abstract:

New cellular network deployments are often associated with new frequency bands. 6G will likely use the upper mid-band from 7-24 GHz. It is called the “golden band” since it provides more spectrum than in current 5G networks and features decent propagation conditions. In this episode, Erik G. Larsson and Emil Björnson discuss the coexistence issues that must be overcome when operating in this band and how much spectrum we can expect to utilize. The future role of multi-antenna technology and its associated challenges are detailed, including the emerging “Gigantic MIMO” term. The prospects of exploiting near-field propagation effects in 6G and the road towards distributed cell-free MIMO are also covered. You can read Emil’s paper about Gigantic MIMO and Nokia’s white paper about coverage evaluation.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Open RAN: Success or Failure?

The 3GPP standard for mobile communications only specifies the basic functionalities needed to make a device from any vendor (e.g., Apple, Samsung, Xiaomi) communicate with network infrastructure from any vendor (e.g., Ericsson, Nokia, Huawei). The exact implementation details (e.g., hardware and algorithms) can be selected freely—a setup selected to promote innovation and competition between vendors. Since the internal interfaces (blue cables in the picture) between radio units, lower-layer processor units, and higher-layer processor units are unspecified, a telecom operator must buy its entire radio access network (RAN) from a single vendor.

Single-vendor networks have several inherent benefits: jointly optimized hardware and software, ease of deployment, and clarity about who to blame when something fails. However, this setup also leads to vendor lock-in, where the operator has little bargaining power when buying new equipment and cannot benefit from new innovations made by other vendors.

Motivation for Open RAN

Today, there are only a few vendors capable of delivering a complete network, and recent geopolitical developments have reduced the viable options in each country even further. To create more options and market competition, the O-RAN Alliance has since 2018 worked to define what components a network must contain (e.g., radio unit, distributed unit, centralized unit) and specify open interfaces between them (e.g., fronthaul, midhaul, and APIs). This is called the Open RAN specification, and it is meant to enable new companies to enter the market and specialize in offering a limited number of components. If successful, this will allow operators to mix-and-match hardware and software from multiple vendors in the same network.
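As a toy illustration of what mix-and-match means in practice, the sketch below models two components from different (fictional) vendors and checks that they implement the same open fronthaul profile; real interoperability testing is of course vastly more involved than a string comparison:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One Open RAN network component (vendor names are fictional)."""
    kind: str       # "radio unit", "distributed unit", or "centralized unit"
    vendor: str
    fronthaul: str  # identifier of the open fronthaul profile it supports

def can_mix_and_match(radio_unit: Component, distributed_unit: Component) -> bool:
    """In a multi-vendor setup, a radio unit and a distributed unit can only
    interoperate if they implement the same open fronthaul profile."""
    return radio_unit.fronthaul == distributed_unit.fronthaul

ru = Component("radio unit", vendor="New RU Startup", fronthaul="O-RAN 7.2x")
du = Component("distributed unit", vendor="Traditional Vendor", fronthaul="O-RAN 7.2x")
print("RU and DU interoperate:", can_mix_and_match(ru, du))
```

The value of the open interface is precisely that this compatibility question becomes meaningful at all; with unspecified internal interfaces, the answer is “no” for any pair of vendors by construction.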

There has been much hype about Open RAN and the innovations it might bring over the past few years. Several people have even tried to convince me that future networks cannot work without it. Nevertheless, six years into the Open RAN era, there are barely any multi-vendor networks, and many people were surprised when AT&T decided to build a new nationwide Open RAN network that only consists of $14 billion of Ericsson equipment. This has been called “Single-vendor Open RAN” and has led to claims from independent experts that Open RAN is failing miserably [see: Light Reading, May 2024; telecoms.com, Aug 2024].

Success criteria for Open RAN

I think the Open RAN development mostly goes as intended, even if promises made in various press releases are yet to be fulfilled. The RAN technology is highly complex, so one cannot expect new vendors to be able to compete with the traditional giants immediately. A mix-and-match network using hardware and software from new vendors will likely not provide competitive performance any time soon. This is reminiscent of how Apple has tried for years to build high-performing 5G baseband chips for their phones but is still far behind Qualcomm.

The potential success of Open RAN should not be measured by the number of new multi-vendor networks being deployed in the next few years but by how the increased competition and intelligent network virtualization lead to cost reductions for the operators—both when deploying new networks and later when network components must be replaced due to hardware failures or upgraded with new functionalities. It will naturally take time for these potential benefits to materialize because most operators have already deployed traditional 4G and 5G networks and will not invest in new “greenfield” deployments anytime soon. Perhaps the first large-scale multi-vendor deployments with hardware from untraditional vendors will take place in geographical regions currently lacking 5G connectivity and where cost savings are more important than top-of-the-line performance. In many other places, I believe Open RAN will not be a commercial reality until 6G networks are rolled out.

The O-RAN Alliance identifies four benefits of Open RAN: openness, interoperability, virtualization, and AI automation. Single-vendor Open RAN compliant networks will use the latter two benefits from day one to reduce hardware/operational costs and enable new services. The operators might also benefit from the former two in the long run, particularly for components that become commodified. Virtualization and AI automation are, of course, features that a state-of-the-art closed RAN deployment also supports—they are not unique features for Open RAN but have been researched under the “Cloud RAN” name for a long period. However, AT&T has demonstrated that there is little incentive to build new networks in the traditional way when one can get openness and interoperability as well.

In conclusion, Open RAN is successful in the sense of being a natural feature in any new network deployment. However, the hyped interface openness and multi-vendor support are not the transformative aspects, and we cannot expect a major uptake until greenfield 6G deployment commences.