The Confusion that Creates Disappointment with 5G

Five years into the 5G era, there is 5G coverage in many places, but the networks have yet to fulfill their initial promises. Many consumers and businesses are disappointed since they anticipated revolutionary changes: a world with ubiquitous ultra-fast connectivity, autonomous connected vehicles, and a digitalized society driven by many Internet-of-things sensors. While such use cases remain on the roadmap of possibilities, the telecom industry’s 5G marketing gave the impression that they would be instantly available in the 5G era. The core of the problem is that no clear distinction was made between 5G networks and 5G services. I will elaborate on these issues and how to avoid a similar disappointment with 6G.

The four stages of development

The development and lifespan of a cellular technology generation can be roughly divided into four stages:

The first stage defines the technical performance requirements in categories such as data rates, reliability, latency, and number of connected devices. These were called IMT-2020 in the case of 5G and were first published in 2015. The performance numbers were selected by studying prospective use cases and their potential connectivity needs. For example, a collection of European vendors, operators, verticals, and universities studied 12 test cases within the METIS project.

The second stage consists of developing a standard that is theoretically capable of reaching the IMT-2020 requirements, as well as developing standard-compliant practical implementations, including hardware and software. The first release of the 5G standard was finalized in 2018, and field trials were conducted both before and afterward.

The third stage entails deploying commercial networks. The first 5G-compliant networks were opened for customers in 2019, and the coverage has since increased from a few city centers per country to the majority of the population in many countries. The service offerings have focused on mobile broadband and fixed wireless access—the same services as in 4G, but with faster speed per device, more data per month in subscriptions, and fewer congestion issues.

The 5G network infrastructure also provides the foundation for companies and industries to design, test, and launch radically new connectivity services, particularly building on ultra-reliable low-latency communications (URLLC) and massive machine-type communication (mMTC). These are two 5G capabilities that did not exist in 4G but were added in the hope that they would lead to new services and revenue streams in the 5G era. Since every person already has a smartphone and is reluctant to pay more for their subscription, increased revenue is tightly connected to creating new service categories and connecting new devices. The service evolution is the fourth and final stage in the development, and there is a feedback loop to Stages 2 and 3 since the standard and network also evolve within each generation.

In a nutshell, 5G networks are the infrastructure that enables connectivity, while 5G services are new monetizable use cases of the networks. The current 5G disappointment arose because the public could not distinguish between these two things and expected to get the previously advertised 5G services as soon as the 5G networks were launched. This happened even though 5G networks have been successful in delivering the promised technical capabilities, such as higher data rates and capacity.

Deploying a cellular network is only the beginning. The network will exist for decades, and new services can gradually be developed.

Prospective versus real services

The prospective use cases identified in Stage 1 of the 5G development were nothing but a brainstorming list created by telecom researchers tasked with predicting what might be required ten years into the future. Unfortunately, the telecom industry began advertising them heavily to the public, giving the impression that these were the real services that 5G would support in Stage 4. Now that 5G networks have existed for five years without any of these 5G-branded services, it is unsurprising that people and businesses have become disenchanted.

5G networks will hopefully enable some new “5G services” that can bring new revenue streams to the financially stressed mobile network operators. However, designing such services takes time and effort because one must create an end-to-end ecosystem with the required hardware, software, and business models. For example, wireless automation of a factory requires collaboration between the factory owner, manufacturers of factory equipment, vendors for both network infrastructure and devices, and the intended network operators.

The development of new 5G services was initially delayed because most telecom operators deployed non-standalone 5G networks, consisting of 5G base stations anchored to a legacy 4G core network. New capabilities such as URLLC and mMTC cannot be used without a 5G core, which created a chicken-and-egg situation: new services could not be created without a 5G core, and the cost of deploying a 5G core could not be justified since no new services were ready for launch.

Fortunately, the core networks have now begun to be upgraded, so testing of new use cases and services is possible. Since 2024, I have been an advisor for the Advanced Digitalisation program in Sweden, which provides co-funding for companies that collaborate in developing 5G services that fit their businesses. There is much enthusiasm for these activities, but also an awareness that many pieces of the 5G service puzzle are still missing.

Lessons learned for 6G

Based on these experiences, I get worried when I see 6G services already being discussed in publications, conferences, and press releases. The 6G development has just entered Stage 2. The IMT-2030 requirements are taking form, and during the next five years, the 6G network technology will be standardized and developed to reach those goals. We can expect 6G network deployments to begin around 2030 and initially provide better performance and efficiency for already existing services. It is not until 2035 that entirely new services might take off and hopefully create new revenue streams for the network operators. These services might require new functionalities provided by 6G, such as edge intelligence, radar sensing, and ubiquitous coverage. They could also be tied to augmented reality glasses for consumers or digitalization/automation of businesses or society; we can only speculate at this point.

Right now, the essential thing is to develop a 6G technology that can provide even greater communication, localization, and sensing performance. The future will tell which innovative services can be built on top of the 6G network infrastructure, when they will be available, and how to monetize them. We must remember that revolutionary changes happen slowly, are often only observable retrospectively, and usually differ greatly from the initial visions.

If the telecom industry describes futuristic use cases in its initial 6G marketing, it will likely create a 6G dissatisfaction that resembles the current 5G disappointment. Hence, I urge the telecom industry to work in parallel on developing new 5G services that can be launched in the coming years and 6G network technology that can be rolled out in the next decade, but to focus the marketing on the 5G services. Only when 6G networks reach the trial phase can the development of 6G services truly begin, so that there is something concrete to present to the public.

There are many prospective use cases for 6G networks, including mobile augmented reality. However, it will take a long time before any futuristic service materializes.

The Holographic Spinning Antenna

In this post, I propose a new type of moveable antenna that I call the Holographic Spinning Antenna (HSA): a concept in which an antenna is spun at extremely high rates. To my knowledge, no previous work has been published on this idea. The spinning antenna fits into the trend of reconfigurable components in RF systems, such as reconfigurable intelligent surfaces and moveable, rotatable, and fluid antennas. Whether the spinning antenna is physically feasible, and if so, whether it brings any additional benefit, remains to be answered. I describe the HSA and contribute a toy example below.

Background

Some time ago, the YouTube algorithm recommended a commercial to me for a “Holographic LED Fan” (HLF), which also goes by names such as “holographic fan display”. An HLF consists of an axis driven by an electric motor, on which at least one arm with a one-dimensional LED stripe is mounted. It is like a fan, but with LED stripes instead of blades. When the fan is turned off but the LEDs are on, you see a 1D stripe of lights. When you turn on the fan and it spins sufficiently fast, the persistence of vision of our eyes blurs the moving LEDs through space, and we see a floating 2D image instead of the 1D stripes.

If you work with wireless communications these 1D LED stripes could induce some déjà vu: The LED elements look quite like planar antenna elements and the arms of the HLF look like a uniform linear array (ULA). My question is: Is there an equivalent effect of spinning a ULA sufficiently fast?

You can also call it the “propeller” or “windmill” or “fan” antenna, if that is more amusing!

Can we transmit more information if an antenna is spun sufficiently fast? Clearly, spinning the LEDs creates a new degree of freedom (angle) which can encode a 2D image.

To keep it simple, I will consider block fading, and my working definition of “sufficiently fast” is at least two revolutions per channel coherence time. I doubt this is fast enough to cause any persistence-like effect in the antennas, but it can still lead to interesting questions. I start with a conceptual description and then a toy example.

Conceptual Description

Under the idealistic assumption of block fading, the idea is to transmit or receive pilots during the first revolution and then to transmit or receive data over the same channel instances during the second revolution. In this way, you increase the channel diversity in a coherence block, but the trade-off is a lower SNR. Of course, if you have multiple antennas placed symmetrically on the disc (as in the figure below), you can send pilots for half a revolution and then transmit data for half a revolution, which allows for a lower RPM. Compared to the “windmill” above, I suggest a slightly more feasible construction below:

ULA on a spinning disc, with symmetrically placed antenna elements.

In this construction I envision 5 main components:

  • A circular disc.
  • At least 1 antenna on the disc. In the illustration there are 6 antennas placed symmetrically, 3 on each side of the disc.
  • A way to spin the disc extremely fast. Maybe in a capsule of vacuum and some maglev technique? See also flywheel energy storage.
  • A way to read and write the signals of the antennas on the spinning disc.
  • A way to power the antennas on the spinning disc.

It has been demonstrated that one can spin a compact disc (CD) up to 25 000 RPM, which is 2.4 milliseconds per revolution. I will not delve into the physics of spinning discs for now and instead refer to the MythBusters as evidence:
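As a back-of-the-envelope illustration (my own numbers, not part of the original idea), the sketch below estimates how fast the disc would need to spin to complete two revolutions within one coherence time, for an assumed carrier frequency and user speed:

```python
# Rough sanity check: how fast must the disc spin to fit two revolutions
# within one channel coherence time? The carrier frequency and speeds are
# illustrative assumptions only.

def coherence_time_s(carrier_hz, speed_m_s, c=3e8):
    """Approximate coherence time as 1/(2*f_D), with Doppler shift f_D = v*f_c/c."""
    return c / (2.0 * speed_m_s * carrier_hz)

carrier_hz = 3.5e9  # assumed carrier frequency
for label, speed in [("pedestrian, 1.4 m/s", 1.4), ("vehicular, 30 m/s", 30.0)]:
    Tc = coherence_time_s(carrier_hz, speed)
    rpm = 2.0 / Tc * 60.0  # two revolutions per coherence time
    print(f"{label}: coherence time ~ {Tc*1e3:.1f} ms -> ~ {rpm:,.0f} RPM needed")

# For comparison, 25 000 RPM corresponds to 60/25000 = 2.4 ms per revolution.
```

Under these assumptions, pedestrian-speed channels would require a few thousand RPM, while vehicular-speed channels would demand far more than the 25 000 RPM mentioned above.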

Toy Example

Assume Rayleigh block fading with a coherence time of 2T discrete symbols. The data symbol and additive noise are normalized complex Gaussian, x\sim\mathcal{CN}(0,1) and w_t\sim\mathcal{CN}(0,1), while the channel is g\sim\mathcal{CN}(0,\beta). We consider a single antenna and compare the spinning HSA case against the typical static case.

Note that the toy example is similar to a switched array, but the spinning antenna is fundamentally different in that you can also transmit/receive continuously in space and not just at the discrete antenna locations in a switched array.

Static antenna

In the static case we have a single antenna fixed in space, depicted below.

Static planar antenna element.

At t=1, the channel g is estimated perfectly. For the remaining 2T-1 symbols, you transmit/receive the same data symbol x over the same channel instance g:

(1)   \begin{equation*}y_t = gx + w_t\end{equation*}

so that

(2)   \begin{equation*}\mathbf{y} = \mathbf{g}x + \mathbf{w}\sim\mathcal{CN}\left(\mathbf{g}x,\mathbf{I}\right), \mathbf{g}=\mathbf{1}g\in \mathbb{C}^{2T-1},\end{equation*}

where \mathbf{w}=[w_2,\dots,w_{2T}]^T and \mathbf{1}=[1,\dots,1]^T. Then, applying MR combining with \mathbf{a} = \frac{\mathbf{g}}{\|\mathbf{g}\|^2} yields

(3)   \begin{equation*}\mathbf{a}^H\mathbf{y} = x + \mathbf{a}^H\mathbf{w},\end{equation*}

where

(4)   \begin{equation*}\mathbf{a}^H\mathbf{w} \sim \mathcal{CN}(0, \mathbf{a}^H\mathbf{a}) = \mathcal{CN}\left(0, \frac{1}{(2T-1)|g|^2}\right),\end{equation*}

giving rate

(5)   \begin{equation*}R = \text{log}_2\left(1 + (2T-1)|g|^2\right),\end{equation*}

and outage probability

(6)   \begin{equation*}P(R<r) = P\left(\mathcal{X}_2^2 < \frac{2^r - 1}{(\beta/2)(2T-1)}\right)\end{equation*}

which has a closed-form expression.
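For the curious reader: since |g|^2 is exponentially distributed with mean \beta (equivalently, 2|g|^2/\beta\sim\mathcal{X}_2^2), the expression evaluates to

\begin{equation*}P(R<r) = 1 - \exp\left(-\frac{2^r-1}{\beta(2T-1)}\right).\end{equation*}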

Spinning antenna

In the spinning case, we have an antenna on a rotating disc. During the first T symbols, you estimate T independent channel instances perfectly. During the last T symbols, you transmit/receive the same data symbol over each of these channels.

The disc takes T symbols per revolution.

Note that the assumption of T independent channels requires the measurement points to be separated by \lambda/2, which in turn means the maximum usable wavelength is twice the disc diameter. In the spinning case, the channel g_t is different for each of the final T measurements:

(7)   \begin{equation*}y_t = g_tx + w_t.\end{equation*}

With MR combining as previously, one gets

(8)   \begin{equation*}\mathbf{a}^H\mathbf{w} \sim  \mathcal{CN}\left(0, \frac{1}{\sum_{t=1}^T|g_t|^2}\right),\end{equation*}

with rate

(9)   \begin{equation*}R = \text{log}_2\left(1 + \sum_{t=1}^T|g_t|^2\right),\end{equation*}

and outage probability

(10)   \begin{equation*}P(R<r) = P\left(\mathcal{X}_{2T}^2 < \frac{2^r - 1}{\beta/2}\right).\end{equation*}
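Similarly, since (2/\beta)\sum_{t=1}^T|g_t|^2\sim\mathcal{X}_{2T}^2, the outage probability in (10) has the closed form

\begin{equation*}P(R<r) = 1 - e^{-\frac{2^r-1}{\beta}}\sum_{n=0}^{T-1}\frac{1}{n!}\left(\frac{2^r-1}{\beta}\right)^n.\end{equation*}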

Simulation results

I simulate \mathbb{E}[R] using 1000 Monte Carlo trials and compute the probability of outage P(R<1) numerically for T=10. We see that the spinning antenna achieves a much lower probability of outage for r=1 than the static antenna if \beta is sufficiently high. However, the static antenna has a higher expected rate, although with a higher variance in the rate than the spinning antenna.

Probability of outage against beta. Beta is also the SNR.
Expected (average) rate against beta. Beta is also the SNR. I use 1000 trials.
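For readers who want to reproduce this type of comparison, here is a minimal Python sketch (my own construction, not the original simulation code); the \beta range, random seed, and output format are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10          # half the coherence block (symbols)
trials = 1000   # Monte Carlo trials
r = 1.0         # target rate for the outage event R < r

for beta_dB in range(-10, 21, 5):
    beta = 10 ** (beta_dB / 10)

    # Static antenna: one channel g ~ CN(0, beta), reused for 2T-1 data symbols
    g = np.sqrt(beta / 2) * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
    R_static = np.log2(1 + (2 * T - 1) * np.abs(g) ** 2)

    # Spinning antenna: T independent channels g_t ~ CN(0, beta), one data symbol each
    G = np.sqrt(beta / 2) * (rng.standard_normal((trials, T)) + 1j * rng.standard_normal((trials, T)))
    R_spin = np.log2(1 + np.sum(np.abs(G) ** 2, axis=1))

    print(f"beta = {beta_dB:3d} dB | E[R]: static {R_static.mean():5.2f}, spinning {R_spin.mean():5.2f}"
          f" | P(R<{r}): static {np.mean(R_static < r):.3f}, spinning {np.mean(R_spin < r):.3f}")
```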

The take-away is that the static antenna has a higher rate on average, but if you want stronger guarantees on your rate the spinning antenna could be a better choice.

Multi-antenna case

I believe you can find additional interesting results if you assume two symmetrically placed antennas and compare the static and spinning cases. For example, in the spinning case, you can effectively beamform in two dimensions using only two antennas, which you cannot do in the static case.

Optimal power control

If the average power is limited per coherence block, the spinning case should lead to a water-filling problem where you want to spend most of the power on the best channels, something that does not arise in the static case.
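To illustrate what such an allocation could look like, here is a minimal water-filling sketch (my own construction; it assumes independent data symbols at the T disc positions, unit noise power, and a total power budget per block, which differs from the repetition scheme in the toy example):

```python
import numpy as np

def waterfill(gains, total_power):
    """Water-filling over parallel channels with gains |g_t|^2 and unit noise power.
    Returns per-channel powers p_t maximizing sum_t log2(1 + p_t * gains[t])."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()  # bracket for the water level mu
    for _ in range(100):                           # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / gains, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

rng = np.random.default_rng(1)
beta, T = 1.0, 10
gains = rng.exponential(beta, size=T)    # |g_t|^2 at the T disc positions
p_wf = waterfill(gains, total_power=T)   # same budget as the equal allocation below
R_wf = np.sum(np.log2(1 + p_wf * gains))
R_eq = np.sum(np.log2(1 + 1.0 * gains))  # equal power: p_t = 1 for all t
print(f"Water-filling: {R_wf:.2f} bits/block, equal power: {R_eq:.2f} bits/block")
```

A static antenna only sees one channel realization per block, so there is nothing for this per-channel power shaping to exploit in that case.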

Summary

I propose the “Holographic Spinning Antenna”: an antenna that spins extremely fast (compared to traditional rotating antennas, such as radar antennas). My minimal toy example and simulation results suggest that the spinning antenna has a lower probability of outage, but not necessarily a higher ergodic rate, compared to the static antenna. The HSA could be physically possible if it is the size of a CD, and could then be used for wavelengths up to 240 millimeters (twice the 120 mm disc diameter).

I would say there are two main questions to be answered:

  • Does the HSA have any significant benefit over a static/switched array that motivates the added physical complexity of spinning the antenna?
  • Is the HSA physically possible?

I encourage anyone curious enough to keep exploring the concept of a spinning antenna under better assumptions and models.

Finally, thanks to Emil Björnson and my main supervisor Erik G. Larsson for letting me post this in the blog, and for letting me entertain my idea of a spinning antenna.

Have a great new year,

Martin Dahl,
PhD Student at Linköping University,
Department of Electrical Engineering (ISY),
Division of Communication Systems (CommSys),
martin.dahl@liu.se

Spinning antennas vs Static antennas. Not my figure, found it online!

Episode 41: 6G in the Upper Mid-Band

We have now released the 41st episode of the podcast Wireless Future. It has the following abstract:

New cellular network deployments are often associated with new frequency bands. 6G will likely use the upper mid-band from 7-24 GHz. It is called the “golden band” since it provides more spectrum than in current 5G networks and features decent propagation conditions. In this episode, Erik G. Larsson and Emil Björnson discuss the coexistence issues that must be overcome when operating in this band and how much spectrum we can expect to utilize. The future role of multi-antenna technology and its associated challenges are detailed, including the emerging “Gigantic MIMO” term. The prospects of exploiting near-field propagation effects in 6G and the road towards distributed cell-free MIMO are also covered. You can read Emil’s paper about Gigantic MIMO and Nokia’s white paper about coverage evaluation.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Open RAN: Success or Failure?

The 3GPP standard for mobile communications only specifies the basic functionalities needed to make a device from any vendor (e.g., Apple, Samsung, Xiaomi) communicate with network infrastructure from any vendor (e.g., Ericsson, Nokia, Huawei). The exact implementation details (e.g., hardware and algorithms) can be selected freely, a setup chosen to promote innovation and competition between vendors. Since the internal interfaces (blue cables in the picture) between radio units, lower-layer processor units, and higher-layer processor units are unspecified, a telecom operator must buy its entire radio access network (RAN) from a single vendor.

Single-vendor networks have several inherent benefits: jointly optimized hardware and software, ease of deployment, and clarity of who to blame when something fails. However, it also leads to vendor lock-in, where the operator has little bargaining power when buying new equipment and cannot benefit from new innovations made by other vendors.

Motivation for Open RAN

Today, there are only a few vendors capable of delivering a complete network, and recent geopolitical developments have reduced the viable options in each country even further. To create more options and market competition, the O-RAN Alliance has since 2018 worked to define what components a network must contain (e.g., radio unit, distributed unit, centralized unit) and specify open interfaces between them (e.g., fronthaul, midhaul, and APIs). This is called the Open RAN specification, and it is meant to enable new companies to enter the market and specialize in offering a limited number of components. If successful, this will allow operators to mix-and-match hardware and software from multiple vendors in the same network.

There has been much hype about Open RAN and the innovations it might bring over the past few years. Several people have even tried to convince me that future networks cannot work without it. Nevertheless, six years into the Open RAN era, there are barely any multi-vendor networks, and many people were surprised when AT&T decided to build a new nationwide Open RAN network that only consists of $14 billion of Ericsson equipment. This has been called “Single-vendor Open RAN” and has led to claims from independent experts that Open RAN is failing miserably [see: Light Reading, May 2024; telecoms.com, Aug 2024].

Success criteria for Open RAN

I think the Open RAN development mostly goes as intended, even if promises made in various press releases are yet to be fulfilled. The RAN technology is highly complex, so one cannot expect new vendors to be able to compete with the traditional giants immediately. A mix-and-match network using hardware and software from new vendors will likely not provide competitive performance any time soon. This is reminiscent of how Apple has tried for years to build high-performing 5G baseband chips for their phones but is still far behind Qualcomm.

The potential success of Open RAN should not be measured by the number of new multi-vendor networks being deployed in the next few years but by how the increased competition and intelligent network virtualization lead to cost reductions for the operators—both when deploying new networks and later when network components must be replaced due to hardware failures or upgraded with new functionalities. It will naturally take time for these potential benefits to materialize because most operators have already deployed traditional 4G and 5G networks and will not invest in new “greenfield” deployments anytime soon. Perhaps the first large-scale multi-vendor deployments with hardware from untraditional vendors will take place in geographical regions currently lacking 5G connectivity and where cost savings are more important than top-of-the-line performance. In many other places, I believe Open RAN will not be a commercial reality until 6G networks are rolled out.

The O-RAN Alliance identifies four benefits of Open RAN: openness, interoperability, virtualization, and AI automation. Single-vendor Open RAN-compliant networks will use the latter two benefits from day one to reduce hardware/operational costs and enable new services. The operators might also benefit from the former two in the long run, particularly for components that become commodified. Virtualization and AI automation are, of course, features that a state-of-the-art closed RAN deployment also supports; they are not unique to Open RAN but have been researched under the “Cloud RAN” name for a long time. However, AT&T has demonstrated that there is little incentive to build new networks in the traditional way when one can get openness and interoperability as well.

In conclusion, Open RAN is successful in the sense of being a natural feature in any new network deployment. However, the hyped interface openness and multi-vendor support are not the transformative aspects, and we cannot expect a major uptake until greenfield 6G deployment commences.

New video on 6G standardization

Much wireless communication research in the past five years has been motivated by the claim that 6G will need the results. The 6G standardization has recently begun, so in the coming years, we will see which “6G candidate technologies” will be utilized and which will not. Research papers often focus on revolutionary new features, while technology development is usually evolutionary since the demand for new features arises gradually.

Although we have yet to determine what technology components will be used, there is much certainty around things like standardization timelines, new feature categories, spectrum candidates, performance metrics, and the interplay between different stakeholders. I explain this in a new 18-minute video about 6G, where I answer what 6G truly is, why it is needed, and how it is developed.

From Massive to Gigantic MIMO in 6G

For the last five years, most of the research into wireless communications has been motivated by its potential role in 6G. As standardization efforts begin in 3GPP this year, we will discover which of the so-called “6G enablers” have attracted the industry’s attention. Probably, only a few new technology components will be introduced in the first 6G networks in 2029, and a few more will be gradually introduced over the next decade. In this post, I will provide some brief predictions on what to expect.

6G Frequency Band

In December 2023, the World Radiocommunication Conference identified three potential 6G frequency bands, which will be analyzed until the next conference in 2027 (see the image below). Hence, the first 6G networks will operate in one of these bands if nothing unforeseen happens. The most interesting new band is around 7.8 GHz, where 650-1275 MHz of spectrum might become available, depending on the country.

This band belongs to the upper mid-band, the previously overlooked range between the sub-7 GHz and the mmWave bands considered in 5G. It has recently been called the golden band since it offers more spectrum than current 5G networks in the 3.5 GHz band and much better propagation conditions than at mmWave frequencies. But honestly, the new spectrum availability is quite underwhelming: It is around twice the amount that 5G networks will already be using in 2029. Hence, we cannot expect any large capacity boost just from introducing these new bands.

Gigantic MIMO arrays

However, the new frequency bands will enable us to deploy many more antenna elements in the same form factor as current antenna arrays. Since the arrays are two-dimensional, the number grows quadratically with the carrier frequency. Hence, we can expect five times as many antennas in the 7.8 GHz band as at 3.5 GHz, which enables spatial multiplexing of five times more data. When combined with twice the amount of spectrum, we can reach an order of magnitude (10×) higher capacity in the first 6G networks than in 5G.

Since the 5G antenna technology is called “Massive MIMO” and 6G will utilize much larger antenna numbers, we need to find a new adjective. I think “Gigantic MIMO”, abbreviated as gMIMO, is a suitable term. The image below illustrates a 0.5 × 0.5 m array with 25 × 25 antenna elements in the 7.8 GHz band. Since practical base stations often have antenna numbers that are powers of two, it is likely that we will see at least 512 antenna elements in 6G.
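As a rough sanity check of these numbers, the small calculation below (my own, assuming half-wavelength element spacing across a 0.5 m square aperture) shows how the element count scales with the carrier frequency:

```python
# Back-of-the-envelope element counts for a fixed 0.5 m x 0.5 m aperture,
# assuming half-wavelength element spacing (illustrative assumption).
c = 3e8
f_5g, f_6g = 3.5e9, 7.8e9
aperture = 0.5  # meters per side of the square array

def elements_per_side(freq_hz, aperture_m):
    spacing = c / freq_hz / 2  # half-wavelength spacing in meters
    return int(aperture_m // spacing)

n_5g = elements_per_side(f_5g, aperture) ** 2
n_6g = elements_per_side(f_6g, aperture) ** 2
print(f"3.5 GHz: {n_5g} elements, 7.8 GHz: {n_6g} elements, ratio ~ {n_6g / n_5g:.1f}")
print(f"Frequency-squared scaling: {(f_6g / f_5g) ** 2:.1f}")
```

Rounding the per-side count differently gives slightly different totals (e.g., the 25 × 25 illustration above), but the frequency-squared scaling is what matters.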

During the last few months, my postdocs and I have looked into what the gMIMO technology could realistically achieve in 6G. We have written a magazine-style paper to discuss the upper mid-band in detail, describe how to reach the envisioned 6G performance targets, and explain what deployment practices are needed to utilize the near-field beamfocusing phenomenon for precise communication, localization, and sensing. We also identify five open research challenges, which we recommend you look into if you want to impact the actual 6G standardization and development.

You can download the paper here: Emil Björnson, Ferdi Kara, Nikolaos Kolomvakis, Alva Kosasih, Parisa Ramezani, and Murat Babek Salman, “Enabling 6G Performance in the Upper Mid-Band by Transitioning From Massive to Gigantic MIMO,” arXiv:2407.05630.

Rethinking Wireless Repeaters

In what ways could we improve cellular massive MIMO-based 5G? Well, to start with, this technology is already pretty good. But coverage holes, and the difficulty of sending multiple streams to multi-antenna users because of insufficient channel rank, remain issues.

Perhaps the ultimate solution is distributed MIMO (also known as cell-free massive MIMO). But while this is at heart a powerful technology, installing backhaul seems dreadfully expensive, and achieving accurate phase alignment for coherent multiuser beamforming on the downlink is a difficult technical problem. Another option is RIS, but they have large form factors and require a lot of training and control overhead, and probably, in practice, some form of active filtering to make them sufficiently band-selective.

A different, radically new approach is to deploy large numbers of physically small and cheap wireless repeaters that receive and instantaneously retransmit signals, appearing as if they were ordinary scatterers in the channel, but with amplification. Repeaters, as such, are already deployed today, but only in niche use cases. Could they be deployed at scale, in swarms, within the cells? What would be required of the repeaters, and how well could a repeater-assisted cellular massive MIMO system work compared to distributed MIMO? What are the fundamental limits of this technology?

At last, some significant new research directions for the wireless PHY community!

Paper: https://arxiv.org/pdf/2406.00142
