The Wireless Future podcast is back with a new season, and we have released the 46th episode. It has the following abstract:
One of the first topics covered in this podcast was reconfigurable intelligent surfaces (RIS). Five years later, Erik G. Larsson and Emil Björnson return to this topic to reflect on what has happened since then. The conversation covers how these surfaces can improve wave propagation between transmitters and receivers, and identifies the most convincing practical use cases. Core challenges overcome in recent years are discussed, and Emil describes the RIS used in his lab and the lessons learned from his measurements. They also go through new forms of RIS, known as Beyond-Diagonal RIS, STAR-RIS, and Stacked Intelligent Metasurfaces. To learn more, you can read the paper “Reconfigurable Intelligent Surfaces in Upper Mid-Band 6G Networks: Gain or Pain?”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
The hype around reconfigurable intelligent surfaces (RIS) has escalated over the past five years. It began with communication-theoretic studies based on “guesstimated” models, but gradually became more grounded through experimental validations in sub-6 GHz bands and the formation of an ETSI Industry Specification Group, which has laid the foundation for future standardization of RIS technology.
The outcomes of these efforts are mixed. On the one hand, it is clear that one can build and operate a RIS roughly as envisioned. Several universities and companies have built functional RIS prototypes, and there are efficient channel estimation and beamforming algorithms for scenarios where the goal is to create a virtual line-of-sight (LOS) path around a blocking object (as illustrated in the figure).
On the other hand, it has become clear that practical hardware design is associated with signal losses that are often unaccounted for in theoretical studies but that limit the practical usefulness of RIS. Many actors in the telecom industry remain convinced that alternative solutions (e.g., network-controlled repeaters and small-cell base stations) are more attractive to deploy than RIS. This is aligned with my five-year-old assessment: “RIS is a hammer looking for a nail”. Since we have not yet found a practical problem that RIS can solve significantly better than other technologies, despite intensive research and pre-standardization work, I believe the RIS technology has passed the Peak of Inflated Expectations on the Gartner hype cycle curve.
I currently see the largest potential for RIS deployments in mmWave bands. This assessment is based on three observations:
The propagation losses through objects grow with the carrier frequency, so at mmWave frequencies it is more attractive to send signals around objects than through them, compared to sub-6 GHz bands. The coverage of mmWave base stations is typically limited to LOS, e.g., a room or a street segment. Hence, a RIS could extend that coverage to a neighboring room or cross street by creating a virtual LOS path.
A RIS of a given physical size provides a larger gain at higher frequencies. The RIS collects incident signal energy proportionally to its size, so that part is frequency-independent. However, the directivity of the reflected beam (beamforming gain) increases with the carrier frequency, as more elements can be fitted into the aperture.
The deployment of small-cell base stations is less attractive at higher frequencies, as their coverage is limited. One RIS can replace one base station in a mmWave band, whereas multiple RISs are needed to replace one small cell at lower frequencies, since such a base station has a larger coverage area (e.g., it could cover multiple rooms or streets).
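The second observation can be made concrete with a back-of-the-envelope calculation. The following sketch counts how many RIS elements fit into a fixed aperture at different carrier frequencies; the half-wavelength element spacing is an idealized but common design assumption:

```python
# Sketch: how RIS element count scales with carrier frequency for a
# fixed physical aperture, assuming half-wavelength element spacing.

def ris_elements(aperture_side_m: float, freq_hz: float) -> int:
    """Number of elements fitting in a square aperture with lambda/2 spacing."""
    c = 3e8  # speed of light [m/s]
    spacing = (c / freq_hz) / 2  # half-wavelength element spacing [m]
    per_side = round(aperture_side_m / spacing)
    return per_side ** 2

# A 0.5 m x 0.5 m surface at sub-6 GHz vs mmWave frequencies:
for f in (3e9, 30e9):
    print(f"{f / 1e9:.0f} GHz: {ris_elements(0.5, f)} elements")
```

A 0.5 m × 0.5 m surface fits 100 elements at 3 GHz but 10,000 at 30 GHz. Since the incident energy collected by the fixed aperture is frequency-independent while the directivity of the reflected beam grows with the element count, the same physical surface becomes far more capable at mmWave frequencies.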
The deployment of 5G mmWave networks is currently very limited, but once we identify the right use cases for these bands, we should also consider deploying networks using a combination of base stations and RIS.
The following video explains the fundamentals of RIS and demonstrates how effectively it can improve coverage at mmWave frequencies. The experiments were made using the XRifle Dynamic RIS and Developer Kit from TMYTEK.
Most of the populated parts of the world have cellular network coverage. You have likely seen base station antennas on both rooftops and towers, but have you reflected on what the different boxes are?
In the following short video, I take you on a tour of a 4G cell site in Stockholm, where there are antennas and radios for the 700-900 MHz, 1.8+2.1 GHz, and 2.6 GHz bands.
We have now released the 45th episode of the podcast Wireless Future. It has the following abstract:
“6G should be for the many, not the few” is the final sentence of a recent book by William Webb. In this episode, Erik G. Larsson and Emil Björnson use this book as the starting point for a conversation on why and how wireless technology can improve its coverage. The end goal is to deliver ubiquitous connectivity, so we can use any wirelessly connected application anywhere at any time. The discussion starts at the conceptual level: Why do cellular networks have generations? How are visions for future generations created, and can they be trusted? Different ways to enhance future networks are then covered, from making optimal use of existing network resources to adding different kinds of new infrastructure where it is most needed. The episode was inspired by the book “The 6G Manifesto”, ISBN 9798338481936.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
We have now released the 44th episode of the podcast Wireless Future. It has the following abstract:
Coverage holes exist in cellular networks despite decades of wireless technology evolution, but new potential solutions are on the horizon. In this episode, Emil Björnson and Erik G. Larsson discuss network-controlled repeaters, reconfigurable intelligent surfaces, and half-duplex relays. Network-controlled repeaters have attracted particular attention from 3GPP in recent years; the conversation focuses on how these can create strong propagation paths through signal amplification. Implementation challenges related to synchronization, band selectivity, and stability are also covered. A detailed overview is provided in “Achieving Distributed MIMO Performance with Repeater-Assisted Cellular Massive MIMO”. Technical details can be found in: https://arxiv.org/pdf/2405.01074 and https://arxiv.org/pdf/2403.17908
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
“6G should be for the many, not the few” is the final sentence of the book The 6G Manifesto by William Webb (published in October 2024). He presents a vision for the evolution of wireless communication technology driven by the goal of providing everyone with sufficiently fast connectivity to support their applications everywhere and at any time. This design goal was uncommon eight years ago when Webb described it in his book, “The 5G Myth”, which I reviewed previously. However, it has become quite mainstream since then. “Uniformly great service for everyone” [Marzetta, 2015] is the main motivation behind the research into Cell-free Massive MIMO systems. Ericsson uses the term limitless connectivity for its 6G vision that “mobile networks deliver limitless connectivity for all applications, allowing anyone and anything to connect truly anywhere and at any time”. The International Telecommunication Union (ITU) has dedicated one of its official 6G usage scenarios to ubiquitous connectivity, which should “provide digital inclusion for all by meaningfully connecting the rural and remote communities, further extending into sparsely populated areas, and maintaining the consistency of user experience between different locations including deep indoor coverage.” [Recommendation ITU-R M.2160-0, 2023]
The unique and interesting contribution of the book is the proposed way to realize the vision. The author claims that we already have the necessary network equipment to provide almost ubiquitous connectivity, but it is unavailable to the users because of poor system integration. Today, each device is subscribed to one out of multiple cellular networks, connected to at most one out of many WiFi networks, and seldom capable of using direct-to-satellite connectivity. Webb’s central claim is that we would come a long way toward the goal of ubiquitous connectivity if each device were always connected to the “best” network (cellular, WiFi, satellite, etc.) among those that can be reached at the given location. This approach makes intuitive sense if one recalls that your phone can often present a long list of WiFi networks (to which you lack the passwords) and that, in some poor-coverage situations, only emergency calls are possible (these are routed through another cellular network).
Some coverage holes will remain even if each device can seamlessly connect to any available network. Webb categorizes these challenges and suggests varying solutions. Rural coverage gaps can potentially be filled by using more cell towers, high-altitude platforms, and satellite connectivity. Urban not-spots can be removed by selective deployment of small cells. Issues in public transportation can be addressed by leaky feeders in tunnels and in-train WiFi connected to cellular networks through antennas deployed outside.
Enabling Multi-Network Coordination
Letting any device connect to any available wireless network is easier said than done. There are legal, commercial, and safety issues to overcome in practice, and no concrete solutions are provided in the book. Instead, the book focuses on the technological level, particularly how to control which network a device is connected to and how to manage handovers between networks. Webb argues that the device cannot decide which network to use because it lacks sufficient knowledge about them. Similarly, the mobile network operator the user is subscribed to cannot decide because it has limited knowledge of other networks. Hence, the proposed solution is to create a centralized multi-network coordinator, e.g., at the national level. This entity would act as a middleman with detailed knowledge of all networks, so it can control the connectivity of all devices. The QUIC transport-layer protocol is suggested as a way to minimize interruptions in data transfer when devices move between networks.
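To make the coordinator’s role concrete, here is a minimal sketch of the kind of selection logic such an entity might run. The network names, attributes, and scoring rule are hypothetical illustrations of the concept, not proposals from the book:

```python
# Toy sketch of a centralized multi-network coordinator: pick the "best"
# reachable network for a device. All names, attributes, and the scoring
# rule below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Network:
    name: str      # e.g., an operator, WiFi, or satellite network
    kind: str      # "cellular", "wifi", or "satellite"
    snr_db: float  # measured link quality at the device's location
    load: float    # fraction of capacity currently in use, 0.0-1.0

def best_network(candidates: list[Network]) -> Network:
    """Select the network with the best quality-vs-load trade-off."""
    # Simple illustrative score: link quality discounted by current load.
    return max(candidates, key=lambda n: n.snr_db * (1.0 - n.load))

networks = [
    Network("OperatorA-5G", "cellular", snr_db=18.0, load=0.8),
    Network("CampusWiFi", "wifi", snr_db=25.0, load=0.3),
    Network("LEO-Sat", "satellite", snr_db=8.0, load=0.1),
]
print(best_network(networks).name)  # CampusWiFi in this example
```

The hard part, as the book notes, is not the selection rule itself but gathering trustworthy, up-to-date knowledge about all networks at one entity and signaling the decisions to every device at scale.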
There are already limited-scale attempts to do what Webb suggests. One example is the eduroam system, which runs across thousands of research and education institutes. Once a device is connected to one such WiFi network, it will automatically connect to any of them. Another example is how recent iPhone products have been able to send emergency messages over satellites without requiring a subscription. Many phones can also use WiFi and cellular links simultaneously to make the connectivity more stable but at the price of shorter battery life. A third example is the company Icomera, which sells an aggregation platform for trains that integrates cellular and satellite links from multiple network operators to provide more stable connectivity for travelers.
The practical benefits of having a multi-network coordinator are clear and convincing. However, there is a risk that the computations and overhead signaling required to operate this coordination entity at scale will be enormous; thus, further research and development are required. The book will hopefully motivate researchers, businesses, and agencies to look deeper into these issues.
Poorly Substantiated Claims
The “wheel diagram” from ITU, with the usage scenario ubiquitous connectivity listed at the bottom.
The main issue with the book is how it promotes its interesting vision by making some poorly substantiated claims and poking fun at the mainstream 6G visions, which it calls “5G-on-steroids”. Two chapters are dedicated to presenting and discussing cherry-picked statements from white papers by manufacturers and academics to give the impression that they are advocating something entirely different from what the book does. The fact that ubiquitous connectivity is one of the six 6G goals from the ITU is overlooked (see the “wheel diagram”). It is easy to find alternative quotes from Ericsson, Huawei, Nokia, Samsung, and academia that support the author’s vision, but he chose not to include them.
The “5G-on-steroids” moniker is used to ridicule the need for more capable 6G networks in terms of bit rates, latency, and network capacity. One part of the argument is: “As seen with 5G, there are no new applications on the horizon, and even if there were 5G is capable of handling all of them. Operators do not want 6G because they perceive it will result in expense for them for no benefit.” This is a strange claim because operators are free to choose what technology and functionalities to deploy based on their business plans. In the 5G era, they have so far mainly deployed non-standalone 5G networks to manage the increasing data demand in their networks. The extra functionalities provided by standalone 5G networks (i.e., by adding a 5G core) are meant to create new revenue. Since most people already have cellphone subscriptions, new revenue streams require creating new services and devices for businesses or people. These things take time but remain on the horizon, as I discussed in a blog post about the current 5G disappointment.
Even in a pessimistic scenario where no new monetizable services arise, one would think that the operators need more capacity in their networks, since worldwide data traffic grows year after year. However, based on the observation that the traffic growth rate has decayed over the last few years, the book claims that data traffic will plateau before the end of the decade. No convincing evidence is provided to support this claim; the author only refers to his previous book as if it had established the claim as a fact. I hypothesize that the wireless traffic growth rate will converge to the overall Internet traffic growth rate (it was 17% in 2024) because wireless technology is becoming deeply integrated into the general connectivity ecosystem, and the traffic will continue to grow just as the utilization of most other resources on Earth does. To be fair, these are just two different speculations/predictions of the future, so we must wait and see what happens. The main issue is that the book uses the zero-traffic-growth assumption as the main argument for why the only problem that remains to be solved in the telecom world is global coverage, which is shaky ground to build on.
Another peculiar claim in the book is that the 5G air interface was “anything but new” because it remained built on OFDM. This overlooks the fact that non-standalone 5G is all about exploiting Massive MIMO (an air interface breakthrough) to enable higher bitrates and cell capacity through spatial multiplexing, in addition to the extra bandwidth. The oversight becomes particularly clear when the book discusses how 6G might reach 10 times higher data rates than 5G. It is argued that 10 times more bandwidth is needed, which can only be found at much higher carrier frequencies, where many more base stations are needed so that the deployment cost will grow rapidly. This chain of arguments is challenged by the fact that one can alternatively achieve 10x by using 2.5 times more spectrum and 4 times more MIMO layers, which is a plausible situation in the upper mid-band without the need for major densification.
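The alternative scaling argument rests on simple arithmetic: at a fixed spectral efficiency per layer, the data rate scales roughly as bandwidth times the number of MIMO layers. A minimal check, using the factors from the text:

```python
# Back-of-the-envelope check: data rate scales roughly as
# bandwidth x number of MIMO layers, at fixed spectral efficiency per layer.

def rate_scaling(bandwidth_factor: float, layer_factor: float) -> float:
    """Multiplicative rate gain relative to a baseline system."""
    return bandwidth_factor * layer_factor

# Option A: 10x more bandwidth at much higher carrier frequencies.
print(rate_scaling(10.0, 1.0))  # 10.0
# Option B: 2.5x more spectrum and 4x more MIMO layers in the upper mid-band.
print(rate_scaling(2.5, 4))     # 10.0
```

Both options deliver the same 10x gain, but Option B avoids the dense base-station deployments that very high carrier frequencies require.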
My Verdict
The 6G Manifesto presents a compelling vision for reaching globally ubiquitous network coverage by first letting any device connect to any wireless network already in place. A centralized multi-network coordinator must be created to govern such a system. The remaining coverage holes could be filled by investing in new infrastructure that covers only the missing pieces. It is worth investigating whether this is a scalable solution with reasonable operational costs and whether one can build a meaningful legal and commercial framework around it. However, when reading the book, one must keep in mind that its descriptions of the current situation, the prospects of creating new services that generate more traffic and revenue, and the mainstream 6G visions are shaky and adapted to fit the book’s narrative.
We have now released the 43rd episode of the podcast Wireless Future. It has the following abstract:
There are many textbooks to choose between when learning the basics of wireless communications. In this episode, Erik G. Larsson and Emil Björnson discuss the recent book “Introduction to Multiple Antenna Communications and Reconfigurable Surfaces” that Emil has written together with Özlem Tugfe Demir. The conversation focuses on ten subtopics that are covered by the book and differentiate it from many previous textbooks. These are related to the dimensionality of physical constants, the choice of performance metrics, and the motivation behind OFDM signaling. Various system modeling characteristics are discussed, including how the antenna array geometry impacts the channel, dual-polarized signals, carrier frequency dependencies, and the connection between models for small-scale fading and radar cross-sections. The roles of non-orthogonal multiple access, hybrid beamforming, and reconfigurable intelligent surfaces are also covered. The textbook is meant for teaching an introductory course on the topic and can be freely downloaded from https://www.nowpublishers.com/NowOpen
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places: