Book Review: The 6G Manifesto

“6G should be for the many, not the few” is the final sentence of the book The 6G Manifesto by William Webb (published in October 2024). He presents a vision for the evolution of wireless communication technology driven by the goal of providing everyone with sufficiently fast connectivity to support their applications everywhere and at any time. This design goal was uncommon eight years ago when Webb described it in his book “The 5G Myth”, which I reviewed previously. However, it has become quite mainstream since then. “Uniformly great service for everyone” [Marzetta, 2015] is the main motivation behind the research into Cell-free Massive MIMO systems. Ericsson uses the term limitless connectivity for its 6G vision that “mobile networks deliver limitless connectivity for all applications, allowing anyone and anything to connect truly anywhere and at any time”. The International Telecommunication Union (ITU) has dedicated one of its official 6G usage scenarios to ubiquitous connectivity, which should “provide digital inclusion for all by meaningfully connecting the rural and remote communities, further extending into sparsely populated areas, and maintaining the consistency of user experience between different locations including deep indoor coverage.” [Recommendation ITU-R M.2160-0, 2023]

The unique and interesting contribution of the book is the proposed way to realize the vision. The author claims that we already have the necessary network equipment to provide almost ubiquitous connectivity, but it is unavailable to the users because of poor system integration. Today, each device is subscribed to one out of multiple cellular networks, connected to at most one out of many WiFi networks, and seldom capable of using direct-to-satellite connectivity. Webb’s central claim is that we would come far toward the goal of ubiquitous connectivity if each device were always connected to the “best” network (cellular, WiFi, satellite, etc.) among those reachable at its current location. This approach makes intuitive sense if one recalls that your phone can often present a long list of WiFi networks (to which you lack the passwords) and that, in some situations with insufficient coverage, only emergency calls are possible (these are carried by another operator’s cellular network).

Some coverage holes will remain even if each device can seamlessly connect to any available network. Webb categorizes these challenges and suggests varying solutions. Rural coverage gaps can potentially be filled by using more cell towers, high-altitude platforms, and satellite connectivity. Urban not-spots can be removed by selective deployment of small cells. Issues in public transportation can be addressed by leaky feeders in tunnels and in-train WiFi connected to cellular networks through antennas deployed outside.

Enabling Multi-Network Coordination

Letting any device connect to any available wireless network is easier said than done. There are legal, commercial, and safety issues to overcome in practice, and no concrete solutions are provided in the book. Instead, the book focuses on the technological level, particularly how to control which network a device is connected to and manage handovers between networks. Webb argues that the device cannot decide what network to use because it lacks sufficient knowledge about them. Similarly, the mobile network operator the user is subscribed to cannot decide because it has limited knowledge of other networks. Hence, the proposed solution is to create a centralized multi-network coordinator, e.g., at the national level. This entity would act as a middleman with detailed knowledge about all networks so it can control the connectivity of all devices. The QUIC transport-layer protocol is suggested to be used to minimize the interruptions in data transfer when devices are moved between networks.
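As a toy illustration of why the decision requires cross-network knowledge, here is a minimal Python sketch of a selection rule such a coordinator might apply. Everything in it (the fields, the candidate networks, and the scoring weights) is my own illustrative assumption; the book does not specify any algorithm.

```python
from dataclasses import dataclass

@dataclass
class NetworkOption:
    """A candidate network visible to a device (hypothetical fields)."""
    name: str
    kind: str          # "cellular", "wifi", or "satellite"
    rsrp_dbm: float    # measured signal strength
    load: float        # fraction of capacity in use, 0..1

def pick_best_network(options):
    """Toy scoring rule: prefer a strong signal on a lightly loaded network.

    A real coordinator would also weigh cost, policy, and mobility state;
    this only illustrates that the decision needs data about all networks.
    """
    def score(n):
        return n.rsrp_dbm - 40.0 * n.load  # trade signal strength vs load
    return max(options, key=score)

candidates = [
    NetworkOption("OperatorA-5G", "cellular", -95.0, 0.7),
    NetworkOption("CafeWiFi", "wifi", -60.0, 0.2),
    NetworkOption("LEO-Sat", "satellite", -110.0, 0.1),
]
best = pick_best_network(candidates)
print(best.name)  # → CafeWiFi
```

The point of the sketch is that neither the device nor the home operator alone sees all of these inputs, which is exactly the gap the proposed coordinator would fill.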

There are already limited-scale attempts to do what Webb suggests. One example is the eduroam system, which runs across thousands of research and education institutes. Once a device is connected to one such WiFi network, it will automatically connect to any of them. Another example is how recent iPhone products have been able to send emergency messages over satellites without requiring a subscription. Many phones can also use WiFi and cellular links simultaneously to make the connectivity more stable but at the price of shorter battery life. A third example is the company Icomera, which sells an aggregation platform for trains that integrates cellular and satellite links from multiple network operators to provide more stable connectivity for travelers.

The practical benefits of having a multi-network coordinator are clear and convincing. However, there is a risk that the computations and overhead signaling required to operate this coordination entity at scale will be enormous; thus, further research and development are required. The book will hopefully motivate researchers, businesses, and agencies to look deeper into these issues.

Poorly Substantiated Claims

The “wheel diagram” from ITU, with the usage scenario ubiquitous connectivity listed at the bottom.

The main issue with the book is how it promotes its interesting vision by making some poorly substantiated claims and poking fun at the mainstream 6G visions, which it calls “5G-on-steroids”. Two chapters are dedicated to presenting and discussing cherry-picked statements from white papers by manufacturers and academics to give the impression that they are advocating something entirely different from what the book does. The fact that ubiquitous connectivity is one of the six 6G goals from the ITU is overlooked (see the “wheel diagram”). It is easy to find alternative quotes from Ericsson, Huawei, Nokia, Samsung, and academia that support the author’s vision, but he chose not to include them.

The “5G-on-steroids” moniker is used to ridicule the need for more capable 6G networks in terms of bit rates, latency, and network capacity. One part of the argument is: “As seen with 5G, there are no new applications on the horizon, and even if there were 5G is capable of handling all of them. Operators do not want 6G because they perceive it will result in expense for them for no benefit.” This is a strange claim because operators are free to choose what technology and functionalities to deploy based on their business plans. In the 5G era, they have so far mainly deployed non-standalone 5G networks to manage the increasing data demand in their networks. The extra functionalities provided by standalone 5G networks (i.e., by adding a 5G core) are meant to create new revenue. Since most people already have cellphone subscriptions, new revenue streams require creating new services and devices for businesses or people. These things take time but remain on the horizon, as I discussed in a blog post about the current 5G disappointment.

Even in a pessimistic situation where no new monetizable services arise, one would think that the operators need more capacity in their networks since the worldwide data traffic grows year after year. However, based on the observation that the traffic growth rate has decayed for the last few years, the book claims that the data traffic will plateau before the end of the decade. No convincing evidence is provided to support this claim; the author only refers to his previous book as if it establishes it as a fact. I hypothesize that the wireless traffic growth rate will converge to the overall Internet traffic growth value (it was 17% in 2024) because wireless technology is becoming deeply integrated into the general connectivity ecosystem, and the traffic will continue to grow, just like the use of most other resources on Earth. To be fair, these are just two different speculations/predictions of the future, so we must wait and see what happens. The main issue is that the book uses the zero-traffic-growth assumption as the main argument for why global coverage is the only problem that remains to be solved in the telecom world, which is shaky ground to build on.

Another peculiar claim in the book is that the 5G air interface was “anything but new” because it remained built on OFDM. This overlooks the fact that non-standalone 5G is all about exploiting Massive MIMO (an air interface breakthrough) to enable higher bit rates and cell capacity through spatial multiplexing, in addition to the extra bandwidth. This oversight becomes particularly clear when the book discusses how 6G might reach 10 times higher data rates than 5G. It is argued that 10 times more bandwidth is needed, which can only be found at much higher carrier frequencies, where many more base stations are needed so that the deployment cost will grow rapidly. This chain of arguments is challenged by the fact that one can alternatively achieve the 10× goal by using 2.5 times more spectrum and 4 times more MIMO layers, which is a plausible situation in the upper mid-band without the need for major densification.
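The arithmetic behind the 2.5 × 4 = 10 alternative can be checked with an idealized Shannon-capacity sketch. The baseline bandwidth, layer count, and SNR below are my own illustrative assumptions, not measured or standardized numbers:

```python
import math

def rate_bps(bandwidth_hz, mimo_layers, snr_linear):
    """Idealized aggregate rate: layers times Shannon capacity per layer."""
    return mimo_layers * bandwidth_hz * math.log2(1 + snr_linear)

# Baseline: 100 MHz and 4 MIMO layers (illustrative 5G-like numbers)
base = rate_bps(100e6, 4, snr_linear=100)
# Alternative 6G route: 2.5x the spectrum and 4x the layers, same SNR
upper_mid = rate_bps(2.5 * 100e6, 4 * 4, snr_linear=100)
print(round(upper_mid / base, 1))  # → 10.0
```

Since the rate is linear in both bandwidth and layers in this idealized model, the 10× factor follows without touching the carrier frequency, which is the crux of the counterargument.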

My Verdict

The 6G Manifesto presents a compelling vision for reaching globally ubiquitous network coverage by first letting any devices connect to any wireless network already in place. A centralized multi-network coordinator must be created to govern such a system. The remaining coverage holes could be filled by investing in new infrastructure that covers only the missing pieces. It is worth investigating whether it is a scalable solution with reasonable operational costs and if one can build a meaningful legal and commercial framework around it. However, when reading the book, one must keep in mind that the descriptions of the current situation, the prospects of creating new services that generate more traffic and revenue, and the mainstream 6G visions are shaky and adapted to fit the book’s narrative.

Episode 43: Ten Things That Are Missing in Many Textbooks

We have now released the 43rd episode of the podcast Wireless Future. It has the following abstract:

There are many textbooks to choose between when learning the basics of wireless communications. In this episode, Erik G. Larsson and Emil Björnson discuss the recent book “Introduction to Multiple Antenna Communications and Reconfigurable Surfaces” that Emil has written together with Özlem Tugfe Demir. The conversation focuses on ten subtopics that are covered by the book and differentiate it from many previous textbooks. These are related to the dimensionality of physical constants, the choice of performance metrics, and the motivation behind OFDM signaling. Various system modeling characteristics are discussed, including how the antenna array geometry impacts the channel, dual-polarized signals, carrier frequency dependencies, and the connection between models for small-scale fading and radar cross-sections. The roles of non-orthogonal multiple access, hybrid beamforming, and reconfigurable intelligent surfaces are also covered. The textbook is meant for teaching an introductory course on the topic and can be freely downloaded from https://www.nowpublishers.com/NowOpen

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Can Explainable AI Make Wireless Networks Better?

The growing emphasis on “explainable AI” in recent years highlights a fundamental issue: many previous AI algorithms have operated as black boxes, with little understanding of what information is extracted and utilized from the training data. While this opaqueness may be tolerated in fields like natural language processing or computer vision, where traditional algorithms have struggled, applying unexplainable AI to an established field like wireless communications is both unnecessary and counterproductive.

In wireless communications, decades of rigorous research and experience from real-world network operation have produced well-functioning, human-crafted algorithms based on established models (e.g., for wave propagation and randomness) and optimized methodologies (e.g., from information theory). If AI is to surpass these state-of-the-art solutions, we must understand why: Is it uncovering previously overlooked characteristics in real-world data, or is it merely exploiting artifacts in a synthetic dataset? The latter is a significant risk that the research community must be mindful of, particularly when training data is generated from simplified numerical models that don’t fully capture real-world complexities but only resemble measured data in some statistical sense.

I have identified three characteristics that are particularly promising to learn from data to improve the operation of the physical and MAC layers in future wireless networks.

  1. User mobility: People typically move through the coverage area of a wireless network in a structured manner, but it can be hard to track and predict the mobility using traditional signal processing methods, except in line-of-sight scenarios. AI algorithms can learn complex maps (e.g., channel charts) and use them for predictive tasks such as beam tracking, proactive handover, and rate adaptation.
  2. User behaviors: People are predictable when it comes to when, how, and where they use particular user applications, as well as what content they are looking for. An AI algorithm can learn such things and utilize them to enhance the user experience through proactive caching or to save energy by turning off hardware components in low-traffic situations.
  3. Application behaviors: The data that must be communicated wirelessly to run an application is generally bursty, even if the application is used continuously by the user. The corresponding traffic pattern can be learned by AI and used for proactive scheduling and other resource allocation tasks. Auto-encoders can also be utilized for data compression, an instance of semantic communication.
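As a minimal illustration of the third point, the sketch below learns a periodic traffic profile from synthetic bursty data that a scheduler could then act on proactively. The data, period, and averaging rule are my own toy assumptions; a real predictor would use a far richer model.

```python
def learn_traffic_profile(history, period):
    """Average observed demand per slot position within a repeating period.

    `history` is a list of per-slot traffic samples (e.g., Mbit per slot).
    This periodic mean is the simplest possible "learned" pattern.
    """
    slots = [[] for _ in range(period)]
    for t, demand in enumerate(history):
        slots[t % period].append(demand)
    return [sum(s) / len(s) for s in slots]

# Synthetic bursty application: a burst every 4th slot
history = [10, 0, 0, 0] * 8
profile = learn_traffic_profile(history, period=4)
print(profile)  # → [10.0, 0.0, 0.0, 0.0]
```

Even this crude profile would let a scheduler reserve resources just before each predicted burst instead of reacting after the queue builds up, which is the essence of proactive scheduling.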

A cellular network that utilizes these characteristics will likely implement different AI algorithms in each cell because the performance benefits come from tuning parameters based on the local conditions. AI can also be used at the network operation level for automation and to identify anomalies.

Many of the features mentioned above already exist in 5G networks but have been added on top of the standard at the vendor’s discretion. The vision for 6G as an “AI-native” network is to provide a standardized framework for data collection, sharing, and utilization across the radio access network. This could enable AI-driven optimizations at a scale previously unattainable, unlocking the full potential of AI in wireless communications. When this happens, we must not forget about explainability: there must at least be a high-level understanding of what characteristics are learned from the data and why they can be utilized to make the network more efficient.

I give some further examples of AI opportunities in wireless networks, as well as associated risks and challenges, in the following video:

Episode 42: The Contours of 6G are Taking Shape

We have now released the 42nd episode of the podcast Wireless Future. It has the following abstract:

Even if the 6G standardization is just beginning, the last five years of intensive research have illuminated the contours of the next-generation technology. In this episode, Emil Björnson and Erik G. Larsson discuss the recent paper “6G takes shape” written by leading researchers at UT Austin and Qualcomm. The conversation covers lessons learned from 5G, the potential role of new frequency bands and waveforms, and new coding schemes and forms of MIMO. The roles of machine learning and generative AI, as well as satellite integration and Open RAN, are also discussed. The original paper by Jeffrey G. Andrews, Todd E. Humphreys, and Tingfang Ji will appear in the IEEE BITS magazine, and the preprint is available on arXiv.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

The Confusion that Creates Disappointment with 5G

Five years into the 5G era, there is 5G coverage in many places, but the networks have yet to fulfill their initial promises. Many consumers and businesses are disappointed since they anticipated revolutionary changes: a world with ubiquitous ultra-fast connectivity, autonomous connected vehicles, and a digitalized society driven by many Internet-of-things sensors. While such use cases remain on the roadmap of possibilities, the telecom industry’s 5G marketing gave the impression that they would be instantly available in the 5G era. The core of the problem is that no clear distinction was made between 5G networks and 5G services. I will elaborate on these issues and how to avoid a similar disappointment with 6G.

The four stages of development

The development and lifespan of a cellular technology generation can be roughly divided into four stages:

The first stage defines the technical performance requirements in categories such as data rates, reliability, latency, and number of connected devices. These were called IMT-2020 in the case of 5G and were first published in 2015. The performance numbers were selected by studying prospective use cases and their potential connectivity needs. For example, a collection of European vendors, operators, verticals, and universities studied 12 test cases within the METIS project.

The second stage consists of developing a standard that is theoretically capable of reaching the IMT-2020 requirements, as well as developing standard-compliant practical implementations, including hardware and software. The first release of the 5G standard was finalized in 2018, and field trials were conducted both before and afterward.

The third stage entails deploying commercial networks. The first 5G-compliant networks were opened for customers in 2019, and the coverage has since increased from a few city centers per country to the majority of the population in many countries. The service offerings have focused on mobile broadband and fixed wireless access—the same services as in 4G, but with faster speed per device, more data per month in subscriptions, and fewer congestion issues.

The 5G network infrastructure also provides the foundation for companies and industries to design, test, and launch radically new connectivity services, particularly building on ultra-reliable low-latency communications (URLLC) and massive machine-type communication (mMTC). These are two 5G capabilities that did not exist in 4G but were added in the hope that they would lead to new services and revenue streams in the 5G era. Since every person already has a smartphone and is reluctant to pay more for their subscription, increased revenue is tightly connected to creating new service categories and connecting new devices. The service evolution is the fourth and final stage in the development, and there is a feedback loop to Stages 2 and 3 since the standard and network also evolve within each generation.

In a nutshell, 5G networks are the infrastructure that enables connectivity, while 5G services are new monetizable use cases of the networks. The current 5G disappointment was created because the public could not distinguish these things but expected to get the previously advertised 5G services when the 5G networks were launched. This happened even though 5G networks have been successful in delivering on the promised technical capabilities, such as higher data rates and capacity.

Deploying a cellular network is only the beginning. The network will exist for decades, and new services can gradually be developed.

Prospective versus real services

The prospective use cases identified in Stage 1 of the 5G development were nothing but a brainstorming list created by telecom researchers tasked with predicting what might be required ten years into the future. Unfortunately, the telecom industry began advertising them heavily to the public, giving the impression that these were the real services that 5G would support in Stage 4. Now, when 5G networks have existed for five years without any of these 5G-branded services, it is unsurprising that people and businesses have been disenchanted.

5G networks will hopefully enable some new “5G services” that can bring new revenue streams to the financially stressed mobile network operators. However, designing such services takes time and effort because one must create an end-to-end ecosystem with the required hardware, software, and business models. For example, wireless automation of a factory requires collaboration between the factory owner, manufacturers of factory equipment, vendors for both network infrastructure and devices, and the intended network operators.

The development of new 5G services was initially delayed because most telecom operators deployed non-standalone 5G networks, consisting of 5G base stations anchored to a legacy 4G core network infrastructure. New capabilities such as URLLC and mMTC cannot be used without a 5G core, which created a chicken-and-egg situation: New services could not be created without a 5G core, and the cost of deploying a 5G core could not be justified since no new services were ready for launch.

Fortunately, the core networks have now begun to be upgraded, so testing of new use cases and services is possible. Since 2024, I have been an advisor for the Advanced Digitalisation program in Sweden, which provides co-funding for companies that collaborate in developing 5G services that fit their businesses. There is much enthusiasm for these activities but also an awareness that many pieces of the 5G service puzzle remain to be found.

Lessons learned for 6G

Based on these experiences, I get worried when I see 6G services already being discussed in publications, conferences, and press releases. The 6G development has just entered Stage 2. The IMT-2030 requirements are taking form, and during the next five years, the 6G network technology will be standardized and developed to reach those goals. We can expect 6G network deployments to begin around 2030 and initially provide better performance and efficiency for already existing services. It is not until 2035 that entirely new services might take off and hopefully create new revenue streams for the network operators. These services might require new functionalities provided by 6G, such as edge intelligence, radar sensing, and ubiquitous coverage. It could also be related to augmented reality glasses for consumers or digitalization/automation of businesses or society—we can only speculate at this point.

Right now, the essential thing is to develop a 6G technology that can provide even greater communication, localization, and sensing performance. The future will tell which innovative services can be built on top of the 6G network infrastructure, when they will be available, and how to monetize them. We must remember that revolutionary changes happen slowly, are often only observable retrospectively, and usually differ greatly from the initial visions.

If the telecom industry describes futuristic use cases in its initial 6G marketing, it will likely create a 6G dissatisfaction that resembles the current 5G disappointment. Hence, I urge the telecom industry to work in parallel on developing new 5G services that can be launched in the coming years and 6G network technology that can be rolled out in the next decade, but to focus the marketing on the 5G services. Only when 6G networks reach the trial phase can the development of 6G services truly begin, so that there is something concrete to present to the public.

There are many prospective use cases for 6G networks, including mobile augmented reality. However, it will take a long time before any futuristic service materializes.

The Holographic Spinning Antenna

In this post, I propose a new type of movable antenna that I call the Holographic Spinning Antenna (HSA): an antenna that is spun at extremely high rates. The concept appears to be novel; I have found no previously published work on it. The spinning antenna fits into the trend of reconfigurable components in RF systems, such as reconfigurable intelligent surfaces and movable, rotatable, and fluid antennas. Whether the spinning antenna is physically feasible, and whether it brings any additional benefit even then, remains to be answered. I describe the HSA and contribute a toy example below.

Background

Some time ago, the YouTube algorithm recommended to me a commercial for a “Holographic Led Fan” (HLF), which also exists under other names such as “holographic fan display”. An HLF consists of an axis driven by an electrical motor, on which at least one arm with a 1-dimensional LED stripe is mounted. Like a fan, but with LED stripes instead of blades. When the fan is turned off but the LED is on, you see a 1D stripe of lights. When you turn on the fan and it spins sufficiently fast, the retinal persistence of our eyes blurs the LEDs’ motion through space, and we see a floating 2D image instead of the 1D stripes.

If you work with wireless communications, these 1D LED stripes could induce some déjà vu: the LED elements look quite like planar antenna elements, and the arms of the HLF look like a uniform linear array (ULA). My question is: Is there an equivalent effect of spinning a ULA sufficiently fast?

You can also call it the “propeller” or “windmill” or “fan” antenna, if that is more amusing!

Can we transmit more information if an antenna is spun sufficiently fast? Clearly, spinning the LEDs creates a new degree of freedom (angle) which can encode a 2D image.

To keep it simple, I will consider block fading, and my idea is that “sufficiently fast” means at least 2 revolutions per channel coherence time. I doubt this is fast enough to cause any persistence-like effect in the antennas, but it can still lead to interesting questions. I start with a conceptual description and then a toy example.

Conceptual Description

Under the idealistic assumption of block fading, the idea is to transmit or receive pilots during the first revolution, and then to transmit or receive data over the same channel instance during the second revolution. In this way, you increase the channel diversity in a coherence block, but the trade-off is a lower SNR. Of course, if you have multiple antennas symmetrically placed on the disc (as in the figure below), you can spin half a revolution for pilots and then half a revolution for data, which would allow for a lower RPM. Compared to the “windmill” above, I suggest a slightly more feasible construction below:

ULA on a spinning disc, with symmetrically placed antenna elements.

In this construction I envision 5 main components:

  • A circular disc.
  • At least 1 antenna on the disc. In the illustration there are 6 antennas placed symmetrically, 3 on each side of the disc.
  • A way to spin the disc extremely fast. Maybe in a vacuum capsule with some maglev technique? See also flywheel energy storage.
  • A way to read and write the signals of the antennas on the spinning disc.
  • A way to power the antennas on the spinning disc.

It has been demonstrated that one can spin a compact disc (CD) up to 25 000 RPM, which is 2.4 milliseconds per revolution. I will not delve into the physics of spinning discs for now and instead refer to the MythBusters as evidence:
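To get a feeling for whether “2 revolutions per coherence time” is within reach at 25 000 RPM, the snippet below compares the revolution time with a standard coherence-time rule of thumb, Tc ≈ 1/(4 × maximum Doppler shift). The carrier frequency and user speed are my own illustrative assumptions:

```python
def revolution_time_ms(rpm):
    """Duration of one revolution in milliseconds."""
    return 60_000.0 / rpm

def coherence_time_ms(carrier_hz, speed_mps):
    """Rule-of-thumb coherence time: Tc ≈ 1 / (4 * maximum Doppler shift)."""
    doppler_hz = speed_mps * carrier_hz / 3e8
    return 1e3 / (4 * doppler_hz)

print(round(revolution_time_ms(25_000), 1))   # → 2.4
# Pedestrian user at 1 m/s on a 3.5 GHz carrier (illustrative numbers):
print(round(coherence_time_ms(3.5e9, 1.0), 1))  # → 21.4
```

Under these assumed numbers, two revolutions take about 4.8 ms against a coherence time of roughly 21 ms, so the requirement looks at least arithmetically plausible for low-mobility users; at higher speeds or carrier frequencies the margin shrinks quickly.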

Toy Example

Assume Rayleigh block fading, coherence time is 2T discrete symbols, data and additive noise are normalized complex Gaussian: x\sim\mathcal{CN}(0,1), w_t\sim\mathcal{CN}(0,1), g\sim\mathcal{CN}(0,\beta). We consider a single antenna and compare the HSA spinning case against the typical static case.

Note that the toy example is similar to a switched array, but the spinning antenna is fundamentally different in that you can also transmit/receive continuously in space and not just at the discrete antenna locations in a switched array.

Static antenna

In the static case we have a single antenna fixed in space, depicted below.

Static planar antenna element.

At t=1, the channel g is estimated perfectly. For the remaining 2T-1 symbols, you transmit/receive the same data symbol x over the same channel instance g:

(1)   \begin{equation*}y_t = gx + w_t\end{equation*}

so that

(2)   \begin{equation*}\mathbf{y} = \mathbf{g}x + \mathbf{w}\sim\mathcal{CN}\left(\mathbf{g}x,\mathbf{I}\right), \mathbf{g}=\mathbf{1}g\in \mathbb{C}^{2T-1},\end{equation*}

where \mathbf{w}=[w_2,\dots,w_{2T}]^T and \mathbf{1}=[1,\dots,1]^T \in \mathbb{C}^{2T-1}. Then apply MR combining with \mathbf{a} = \frac{\mathbf{g}}{\|\mathbf{g}\|^2} to the received signal

(3)   \begin{equation*}\mathbf{a}^H\mathbf{y} = x + \mathbf{a}^H\mathbf{w},\end{equation*}

where

(4)   \begin{equation*}\mathbf{a}^H\mathbf{w} \sim \mathcal{CN}(0, \mathbf{a}^H\mathbf{a}) = \mathcal{CN}\left(0, \frac{1}{(2T-1)|g|^2}\right),\end{equation*}

giving rate

(5)   \begin{equation*}R = \text{log}_2\left(1 + (2T-1)|g|^2\right),\end{equation*}

and outage probability

(6)   \begin{equation*}P(R<r) = P\left(\mathcal{X}_2^2 < \frac{2^r - 1}{(\beta/2)(2T-1)}\right),\end{equation*}

which has a closed-form expression that is excluded here.
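For completeness, the excluded closed form is easy to write out: since |g|^2 is exponentially distributed with mean \beta, the outage probability is 1 - \exp(-(2^r - 1)/(\beta(2T-1))). Here is a quick Monte Carlo sketch (my own, with arbitrarily chosen parameters) confirming it:

```python
import math
import random

def static_outage_exact(beta, T, r):
    """Closed form of (6): P(R < r) = 1 - exp(-(2^r - 1) / (beta (2T - 1)))."""
    return 1.0 - math.exp(-(2**r - 1) / (beta * (2 * T - 1)))

def static_outage_mc(beta, T, r, trials=200_000, seed=1):
    """Monte Carlo estimate: |g|^2 is exponential with mean beta."""
    rng = random.Random(seed)
    threshold = (2**r - 1) / (2 * T - 1)  # outage when |g|^2 falls below this
    hits = sum(rng.expovariate(1 / beta) < threshold for _ in range(trials))
    return hits / trials

beta, T, r = 1.0, 10, 1.0
print(round(static_outage_exact(beta, T, r), 4))  # → 0.0513
```

The simulated estimate matches the closed form to within the Monte Carlo accuracy, which is a useful check before moving to the spinning case.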

Spinning antenna

In the spinning case, we have an antenna on a rotating disc. During the first T symbols, you estimate T independent channel instances perfectly. During the last T symbols, you transmit/receive the same data symbol over each of these channels.

The disc takes T symbols per revolution.

Note that the assumption of T independent channels necessitates that the points of measurement are separated by \lambda/2, which in turn means the maximum wavelength is two times the disc diameter. In the spinning case the channel g is new for each of the final T measurements:

(7)   \begin{equation*}y_t = g_tx + w_t.\end{equation*}

With MR combining as previously, one gets

(8)   \begin{equation*}\mathbf{a}^H\mathbf{w} \sim  \mathcal{CN}\left(0, \frac{1}{\sum_{t=1}^T|g_t|^2}\right),\end{equation*}

with rate

(9)   \begin{equation*}R = \text{log}_2\left(1 + \sum_{t=1}^T|g_t|^2\right),\end{equation*}

and outage probability

(10)   \begin{equation*}P(R<r) = P\left(\mathcal{X}_{2T}^2 < \frac{2^r - 1}{\beta/2}\right).\end{equation*}

Simulation results

I simulate \mathbb{E}[R] using 1000 Monte Carlo trials and compute the probability of outage P(R<1) numerically with T=10. We see that the spinning antenna achieves a much lower probability of outage for r=1 than the static one if \beta is sufficiently high. However, the static antenna has a higher expected rate, although with a higher rate variance than the spinning antenna.

Probability of outage against beta. Beta is also the SNR.
Expected (average) rate against beta. Beta is also the SNR. I use 1000 trials.

The take-away is that the static antenna has a higher rate on average, but if you want stronger guarantees on your rate the spinning antenna could be a better choice.

Multi-antenna case

I believe you can find additional interesting things if you assume two symmetrically placed antennas and compare the static and spinning cases. For example, in the spinning case, you can effectively beamform in 2 dimensions using only 2 antennas, which you cannot do in the static case.

Optimal power control

If the average power is limited per coherence block, the spinning case should lead to a water-filling problem where you want to spend most power on the best channels, which you won't get in the static case.
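The water-filling allocation hinted at here can be sketched in a few lines. This is the generic textbook algorithm over known channel gains, not something specific to the HSA, and the gains and power budget below are arbitrary illustrative numbers:

```python
def water_filling(gains, total_power):
    """Classic water-filling over known channel gains.

    Allocates p_t = max(0, mu - 1/g_t), with the water level mu chosen so
    that the active powers sum to total_power.
    """
    active = sorted(gains, reverse=True)
    while active:
        mu = (total_power + sum(1 / g for g in active)) / len(active)
        if mu - 1 / active[-1] >= 0:  # weakest active channel still gets power
            break
        active.pop()                  # drop channels too weak to use
    return [max(0.0, mu - 1 / g) for g in gains]

powers = water_filling([2.0, 1.0, 0.1], total_power=2.0)
print([round(p, 2) for p in powers])  # → [1.25, 0.75, 0.0]
```

Note how the weakest channel gets no power at all: in the spinning case, the T independent gains per block give this optimization something to exploit, whereas the static case has only a single gain and nothing to redistribute.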

Summary

I propose the “Holographic Spinning Antenna”, an antenna that spins extremely fast (compared to traditional spinning antennas, such as radar antennas). From my minimal toy example and simulation results, I suggest that the spinning antenna has a lower probability of outage, but not necessarily a higher ergodic rate, compared to the static antenna. The HSA could be physically possible if it is the size of a CD and could then be used for wavelengths up to 2 × 120 = 240 millimeters (twice the disc diameter).

I would say there are two main questions to be answered:

  • Does the HSA have any significant benefit compared to a static/switched array motivating the added physical complexity of spinning the antenna?
  • Is the HSA physically possible?

I encourage anyone curious enough to keep exploring the concept of a spinning antenna under better assumptions and models.

Finally, thanks to Emil Björnson and my main supervisor Erik G. Larsson for letting me post this in the blog, and for letting me entertain my idea of a spinning antenna.

Have a great new year,

Martin Dahl,
PhD Student at Linköping University,
Department of Electrical Engineering (ISY),
Division of Communication Systems (CommSys),
martin.dahl@liu.se

Spinning antennas vs Static antennas. Not my figure, found it online!

Episode 41: 6G in the Upper Mid-Band

We have now released the 41st episode of the podcast Wireless Future. It has the following abstract:

New cellular network deployments are often associated with new frequency bands. 6G will likely use the upper mid-band from 7-24 GHz. It is called the “golden band” since it provides more spectrum than in current 5G networks and features decent propagation conditions. In this episode, Erik G. Larsson and Emil Björnson discuss the coexistence issues that must be overcome when operating in this band and how much spectrum we can expect to utilize. The future role of multi-antenna technology and its associated challenges are detailed, including the emerging “Gigantic MIMO” term. The prospects of exploiting near-field propagation effects in 6G and the road towards distributed cell-free MIMO are also covered. You can read Emil’s paper about Gigantic MIMO and Nokia’s white paper about coverage evaluation.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:
