Category Archives: Beyond 5G

Revitalizing the Research on Wireless Communications in a New Decade

In the last decade, the research on wireless communications has been strongly focused on the development of 5G. Plenty of papers have started with sentences of the kind: “We consider X, which is a promising method that can greatly improve Y in 5G.” For example, X might be Massive MIMO and Y might be the spectral efficiency. We now know which physical-layer methods made it into the first release of the 5G standard, and which did not. It remains to be seen which methods will actually be used in practice and how large performance improvements 5G can deliver.

There is no doubt that the 5G research has been successful. However, what remains is to refine the developed methods to bridge the gap between the simplifying models and assumptions considered in academia and the practical conditions faced by the industry. Although new scientific papers appear on arXiv.org almost every day, few of them focus on these practically important aspects of the 5G development. Instead, minor variations on well-studied problems dominate and the models are the same simplified ones as ten years ago. We seem to be stuck doing the same things that led to important advances at the beginning of the 2010s, although we have already solved most problems that can be solved using such simple models. This is why I think we need to revitalize the research!

Two concrete examples

The following two examples explain what I mean.

Example 1: Why would we need more papers on Massive MIMO with uncorrelated Rayleigh fading channels and maximum ratio (MR) processing? We already know that practical channels are spatially correlated and other processing methods are vastly superior to MR while having practically affordable complexity.

Example 2: Why would we need more papers on hybrid beamforming design for flat-fading channels? We already know that the hybrid architecture is only meaningful in wideband mmWave communications, in which case the channels feature frequency-selective fading. The generalization is non-trivial since it is mainly under frequency-selective conditions that the weaknesses/challenges of the hybrid approach appear.

I think that the above-mentioned simplifications were well motivated in the early 2010s when many of the seminal papers on Massive MIMO and mmWave communications appeared. It is usually easier to reach ambitious research goals by taking small steps towards them. It is acceptable to make strong simplifications in the first steps, to achieve the analytical tractability needed to develop a basic intuition and understanding. The later steps should, however, gradually move toward more realistic assumptions that also make the analysis less tractable. We must continuously question whether the initial insights also apply under more practical conditions or whether they were artifacts of the initial simplifications.

Unfortunately, this happened far too seldom in the last decade. Our research community tends to prioritize analytical tractability over realistic models. If a model has been used in prior work, it can often be reused in new papers without being questioned by the reviewers. When I review a paper and question the system model, the authors usually respond with a list of previous papers that use the same model, rather than the theoretical motivation that I would like to see.

It seems to be far easier to publish papers with simple models that enable derivation of analytical “closed-form” expressions and development of “optimal” algorithms, than to tackle more realistic but challenging models where these things cannot be established. The two examples above are symptoms of this problem. We cannot continue in this way if we want to keep the research relevant in this new decade. Massive MIMO and mmWave communications will soon be mainstream technologies!

Entering a new decade

The start of the 2020s is a good time for the research community to start over and think big. Massive MIMO was proposed in a paper from 2010 and initially seemed too good to be true, possibly due to the simplicity of the models used in the early works. In a paper that appeared in 2015, we identified ten “myths” that had flourished when people with a negative attitude against the technology tried to pinpoint why it wouldn’t work in practice. Today – a decade after its inception – Massive MIMO is a key 5G technology and has even become a marketing term used by cellular operators. The US operator Sprint has reported that its first generation of Massive MIMO base stations improves the spectral efficiency by around 10x in real networks.

I believe history will repeat itself during this decade. The research into the next big physical layer technology will take off this year – we just don’t know what it will be. There are already plenty of non-technical papers that try to make predictions, so the quota for such papers is already filled. I’ve written one myself entitled “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”. What we need now are visionary technical papers (like the seminal Massive MIMO paper by Marzetta) that demonstrate mathematically how a new technology can achieve ten-fold performance improvements over the state-of-the-art, for example, in terms of spectral efficiency, reliability, latency, or some other relevant metric. Maybe one or two of the research directions listed in my paper will be at the center of 6G. Much research work remains before we can know; thus, this is the right time to explore a wide range of new ideas.

Five ways to revitalize the research

To keep the wireless communication research relevant, we should stop considering minor variations on previously solved problems and instead focus either on implementation aspects of 5G or on basic research into entirely new methods that might eventually play a role in 6G. In both cases, I have the following five recommendations for how we can conduct more efficient and relevant research in this new decade.

1. We may start the research on new topics by using simplified models that are analytically tractable, but we must not get stuck in using those models. A beautiful analysis obtained with an unrealistic model might generate more confusion than practical insight. Just remember how much Massive MIMO research focused on the pilot contamination problem, simply because it happened to be the asymptotically limiting factor under the simplified models, which is not the case in general.

2. We must be more respectful towards the underlying physics, particularly electromagnetic theory. We cannot continue normalizing the pathloss variables or guesstimating how they can be computed. When developing a new technology, we must first get the basic models right. Otherwise we risk making fundamental mistakes and – even worse – tricking others into repeating those mistakes for years to come. I covered the danger of normalization in a previous blog post.

3. We must not forget about previous methods when evaluating new methods, but think carefully about what the true state-of-the-art is. For example, if we want to improve the performance of a cellular network by adding new equipment, we must compare it to existing equipment that could alternatively have been added. I covered the importance of comparing intelligent reflecting surfaces with relays in a previous blog post.

4. We must make sure new algorithms are reproducible and easily comparable, so that every paper is making useful progress. This can be achieved by publishing simulation code alongside papers and evaluating new algorithms in the same setup as previous algorithms. We might take inspiration from the machine learning field where ImageNet is a common benchmark.

5. We must not take the correctness of models and results in published papers for granted. This is particularly important nowadays when new IEEE journals with very short review times are gaining traction. Few scientific experts can promise a proper review of a full-length paper in just seven days; thus, many reviews will be substandard. This is a step in the wrong direction and can severely reduce the quality and trustworthiness of published papers.

Let us all make an effort to revitalize the research methodology and the selection of research problems to solve in the 2020s. If you have further ideas, please share them in the comment field!

Dynamic Cooperation Clusters

By deploying many distributed antennas instead of a few multi-antenna base stations, a more uniform communication performance can be achieved over the coverage area. The peak rates might go down, but the rate that can be guaranteed with 95% probability goes up substantially. This is the main motivation behind Cell-free Massive MIMO, which is the new name for Network MIMO with a large number of antennas (many more than the number of users). The key difference from conventional ultra-dense networks is that the distributed antennas cooperate to transmit phase-coherently in the downlink and to process the received uplink signals coherently. One promising way to deploy these systems is by using radio stripes.
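
To illustrate this motivation, here is a minimal Monte Carlo sketch in Python. It is a toy model with only distance-dependent pathloss and arbitrarily chosen parameters, not a faithful Cell-free Massive MIMO simulation: it compares one central multi-antenna base station with the same number of distributed, phase-coherently cooperating single-antenna antennas, in terms of the rate achieved with 95% probability and a proxy for the peak rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy deployment: all numbers are arbitrary illustration choices
area = 500.0           # side of the square coverage area [m]
M = 64                 # total number of service antennas
P = 1.0                # total downlink power [W]
noise = 1e-13          # noise power [W]
n_users = 10000        # independent user positions

def pathloss(d):
    """Simple distance-based pathloss with a 10 m minimum distance."""
    return 1e-4 * np.maximum(d, 10.0) ** (-3.7)

users = rng.uniform(0, area, size=(n_users, 2))

# Cellular: one M-antenna base station in the center, beamforming gain M
d_bs = np.linalg.norm(users - area / 2, axis=1)
snr_cellular = M * P * pathloss(d_bs) / noise

# Cell-free: M single-antenna antennas on a grid, each using power P/M and
# transmitting phase-coherently to the user (interference is not modeled)
grid = np.linspace(area / 16, area - area / 16, 8)
antennas = np.array([(x, y) for x in grid for y in grid])        # 8 x 8 = 64
d_ap = np.linalg.norm(users[:, None, :] - antennas[None, :, :], axis=2)
snr_cellfree = np.sqrt(P / M * pathloss(d_ap)).sum(axis=1) ** 2 / noise

for name, snr in [("cellular", snr_cellular), ("cell-free", snr_cellfree)]:
    rate = np.log2(1 + snr)
    print(f"{name:9s}  95%-likely rate: {np.percentile(rate, 5):5.2f}"
          f"  peak (95th perc.): {np.percentile(rate, 95):5.2f} bit/s/Hz")
```

Interference and channel estimation are ignored here; the point is only that the distributed deployment shortens the distance to the nearest serving antennas for the unlucky users at the cell edges.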

The first papers on Cell-free Massive MIMO assumed that all antennas have access to the downlink data of all users and take part in the uplink signal detection of all users. This is both impractical and unnecessary in a large network, where each user is only physically close to a subset of the antennas. Hence, it makes practical sense that only those antennas that can reach the user with a signal power that is non-negligible compared to the thermal noise should transmit to that user and participate in the detection of its uplink data. 
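
As a minimal sketch of such a selection rule (my own toy illustration, not taken from any specific paper), the snippet below keeps only the antennas whose received pilot SNR from the user exceeds a margin above the noise floor; the gains, powers, and the 3 dB margin are placeholder values.

```python
import numpy as np

def select_serving_antennas(channel_gains, pilot_power, noise_power, margin_db=3.0):
    """Return indices of antennas whose received pilot SNR exceeds the margin.

    channel_gains: large-scale fading gains (linear scale) between the user
    and each antenna. The 3 dB margin is an arbitrary illustration choice.
    """
    snr = pilot_power * np.asarray(channel_gains) / noise_power
    return np.flatnonzero(10 * np.log10(snr) > margin_db)

# Example: only antennas 0 and 3 are close enough to serve this user
gains = [2e-8, 5e-13, 1e-14, 8e-9]
print(select_serving_antennas(gains, pilot_power=0.1, noise_power=1e-13))  # [0 3]
```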

I designed a framework for this 10 years ago, which I called “dynamic cooperation clusters” (DCC), and it can be readily applied to Cell-free Massive MIMO. The main idea was that every user selects which antennas should serve it in a user-centric manner, which means that any antenna subset can be selected. This stands in contrast to the conventional network-centric approach (which dominated the 4G CoMP literature), where only certain predefined disjoint groups of antennas are allowed to cooperate.

Although the DCC framework is a perfect fit for Cell-free Massive MIMO, the performance analysis that we did 10 years ago was admittedly simplified compared to what is possible with the latest methodology. We considered TDD systems that utilize reciprocity but assumed slowly fading channels that can be estimated without error, thereby avoiding pilot contamination and the computation of ergodic rates. To provide a bridge to the past, we wrote the paper “Scalable Cell-Free Massive MIMO Systems” which revisits the DCC framework in the context of Cell-free Massive MIMO, using the latest analytical methods from the Massive MIMO literature. 

Most importantly, the new paper provides an intuitive way to select the user-centric cooperation clusters based on the uplink pilot transmissions. When a user connects to the network, we suggest that the antenna with the best channel condition is given the responsibility to guarantee the user service. The user is assigned to the pilot that is least affected by pilot contamination in that particular region. Moreover, each antenna serves at most as many users as there are pilots: at most one user per pilot, to limit the negative effect of pilot contamination. Under these assumptions, we show that the users get nearly the same rates as if all the antennas served all users, but with greatly reduced complexity and fronthaul requirements. In conclusion, scalable and well-performing implementations of Cell-free Massive MIMO are possible!
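
The following Python sketch mimics that procedure in simplified form. It is my own toy reinterpretation of the idea, not the code used in the paper, and the random channel gains are purely illustrative.

```python
import numpy as np

def form_clusters(beta, n_pilots):
    """Toy user-centric cluster formation for Cell-free Massive MIMO.

    beta: (n_antennas, n_users) large-scale fading gains.
    Returns the pilot index per user and a boolean serving matrix D,
    where D[m, k] = True means that antenna m serves user k.
    Simplified reinterpretation, not the exact algorithm from the paper.
    """
    n_ant, n_users = beta.shape
    pilot_of = np.zeros(n_users, dtype=int)
    D = np.zeros((n_ant, n_users), dtype=bool)

    for k in range(n_users):                      # users connect one by one
        master = np.argmax(beta[:, k])            # strongest-channel antenna
        contamination = np.zeros(n_pilots)        # contamination seen by the master
        for j in range(k):
            contamination[pilot_of[j]] += beta[master, j]
        pilot_of[k] = np.argmin(contamination)    # least-contaminated pilot
        D[master, k] = True                       # the master guarantees service

    # Every antenna additionally serves at most one user per pilot: the strongest
    for m in range(n_ant):
        for t in range(n_pilots):
            users_on_t = np.flatnonzero(pilot_of == t)
            if users_on_t.size:
                D[m, users_on_t[np.argmax(beta[m, users_on_t])]] = True
    return pilot_of, D

# Tiny example with random gains (illustration only)
rng = np.random.default_rng(0)
beta = rng.exponential(size=(16, 8))              # 16 antennas, 8 users
pilots, D = form_clusters(beta, n_pilots=4)
print("pilot per user:   ", pilots)
print("antennas per user:", D.sum(axis=0))
```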

The following video explains the main ideas:

Cell-free Massive MIMO and Radio Stripes

I have recorded a popular science video that explains how a cell-free network architecture can provide major performance improvements over 5G cellular networks, and why radio stripes are a promising way to implement it:

If you want more technical details, I recommend our recent survey paper “Ubiquitous Cell-Free Massive MIMO Communications”. One of the authors, Dr. Hien Quoc Ngo at Queen’s University Belfast, has created a blog about Cell-free Massive MIMO. In particular, it contains a list of papers on the topic and links to the programming code of some of them.

Centralized Versus Distributed Processing in Cell-Free Massive MIMO

A figure from my first paper on Network MIMO, which is nowadays called Cell-Free Massive MIMO.

The new Cell-Free Massive MIMO concept has its roots in the classical Network MIMO concept, and has also been given many other names over the years (e.g., coordinated multipoint). When I started my research on the topic in 2009, the standard assumption was that a set of base stations were jointly transmitting to a set of users by sharing both the data signals and their respective channel state information (CSI). In my first journal paper, we showed that one can get away with only sharing the data signals between the base stations because each one only needs local CSI (between itself and the users) to beamform to the users. The price to pay is that the base stations cannot cancel each others’ interference, so each one should preferably have multiple antennas so it can control how much interference it causes. This was my first well-cited paper but, to be honest, I am still not sure how significant the results are.
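
To make the “local CSI” idea concrete, here is a small sketch (a toy illustration, not the exact precoding scheme from the 2010 paper) where each base station computes maximum ratio transmission precoders from its own channel estimates only, while a centralized zero-forcing alternative needs the full stacked channel matrix gathered in one place.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bs, n_ant, n_users = 3, 4, 2   # 3 cooperating BSs with 4 antennas each, 2 users

# Local CSI: H_local[b] holds only the channels between BS b and the users
H_local = [rng.standard_normal((n_users, n_ant))
           + 1j * rng.standard_normal((n_users, n_ant)) for _ in range(n_bs)]

# Distributed precoding: each BS computes MRT from its own CSI, no CSI exchange
W_local = []
for H in H_local:
    W = H.conj().T                                   # maximum ratio transmission
    W /= np.linalg.norm(W, axis=0, keepdims=True)    # unit-norm beam per user
    W_local.append(W)

# Centralized alternative: zero-forcing on the stacked channel matrix,
# which requires all CSI to be gathered in one place
H_full = np.hstack(H_local)                          # n_users x (n_bs * n_ant)
W_zf = np.linalg.pinv(H_full)
W_zf /= np.linalg.norm(W_zf, axis=0, keepdims=True)

# Cross-user gains: the off-diagonal entries are residual interference
G_mrt = sum(H @ W for H, W in zip(H_local, W_local))
G_zf = H_full @ W_zf
off_diag = lambda G: np.abs(G - np.diag(np.diag(G))).max()
print("max cross-user gain, distributed MRT:", round(float(off_diag(G_mrt)), 3))
print("max cross-user gain, centralized ZF: ", round(float(off_diag(G_zf)), 12))
```

The distributed scheme leaves non-zero cross-user gains, which is exactly the price mentioned above, while the centralized scheme cancels them at the cost of sharing all CSI.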

On the one hand, it is very convenient to only utilize local CSI at every base station, because it can be estimated from uplink pilots in a TDD system, which was a key motivation behind our 2010 paper. The time-critical precoding computation can then be initiated immediately after the pilots have been received, instead of waiting for the CSI to be shared between the base stations. This property was later utilized in the first Cell-Free Massive MIMO papers [Ngo, Nayebi] to alleviate the need for sharing CSI.

On the other hand, CSI is usually a small fraction of the signaling between a base station and the rest of the system in Network MIMO. The majority of the signaling consists of the data signals; for example, if a coherence block with 200 channel uses consists of 20 pilot symbols and 180 data symbols, then there is 180/20 = 9 times more data than CSI. Interestingly, our recent paper “Making Cell-Free Massive MIMO Competitive With MMSE Processing and Centralized Implementation” shows that if Cell-Free Massive MIMO is implemented by sending all CSI to an edge-cloud processor that takes care of all the signal processing, both the communication performance and the signaling load can be greatly improved as compared to the fully distributed approach (which was considered in my 2010 paper and then became the standard assumption in the Cell-Free Massive MIMO literature).
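
The arithmetic behind this argument is simple enough to write out explicitly (a back-of-the-envelope sketch with the numbers from the text; the paper quantifies the signaling load more carefully):

```python
# Back-of-the-envelope numbers from the text above; the signaling
# quantification in the cited paper is more detailed than this.
coherence_block = 200                            # channel uses per coherence block
pilot_symbols = 20                               # uplink pilot symbols
data_symbols = coherence_block - pilot_symbols   # 180 data symbols

# CSI (estimated from the pilots) is a small fraction of what must be moved
# around: there are 9 times more data symbols than pilot symbols per block.
print("data-to-pilot ratio:", data_symbols / pilot_symbols)   # 9.0
```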

The bottom line is that it is hard to make a distributed network implementation competitive compared to a centralized one. Unless we can find a really clever implementation, there is a risk that we lose too much in communication performance and also raise the fronthaul capacity requirements.

Multiple Antenna Technologies for Beyond 5G

I am one of the guest editors of the JSAC special issue on “Multiple Antenna Technologies for Beyond 5G” which had its submission deadline on October 1. We received 133 submissions that span emerging topics such as Cell-free Massive MIMO, intelligent reflective surfaces, terahertz communications, new hardware architectures (e.g., lens arrays), and index modulation. It will take a lot of hard work to review all these submissions, but I am convinced that the selected papers will be of high quality and present a range of interesting concepts that can be utilized in beyond 5G systems.

In addition to the technical papers, the guest editors have also written a survey paper that has the same name as the special issue. A draft of it is available on arXiv. This paper describes the state-of-the-art and open problems related to several of the topics described above.

Scalable Cell-Free Massive MIMO

Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works: it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority, no shock, is the lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this concerns the power control, the signal processing, and the organization of the backhaul.

At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution requires a scalable power control policy. No surprise, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) also devised a power control policy with similar qualities!

Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…

Reconfigurable Reflectarrays and Metasurfaces

In the research on Beyond Massive MIMO systems, a number of new terminologies have been introduced with names such as:

  1. Reconfigurable reflectarrays;
  2. Software-controlled metasurfaces;
  3. Intelligent reflective surfaces.

These are basically the same things (and there are many variations on the three names), which is important to recognize so that the research community can analyze them in a joint manner.

Background

The main concept has its origin in reflectarray antennas, which is a class of directive antennas that behave a bit like parabolic reflectors but can be deployed on a flat surface, such as a wall. More precisely, a reflectarray antenna takes an incoming signal wave and reflects it into a predetermined spatial direction, as illustrated in the following figure:

Figure 1: A reflectarray antenna (also known as metasurface and intelligent reflective surface) takes an incoming wave and reflects it as a beam in a particular direction (or towards a spatial point).

Instead of relying on the physical shape of the antenna to determine the reflective properties (as is the case for parabolic reflectors), a reflectarray consists of many reflective elements that impose element-specific time delays to their reflected signals. These elements are illustrated by the dots on the surface in Figure 1. In this way, the reflected wave is beamformed and the reflectarray can be viewed as a passive MIMO array. The word passive refers to the fact that the radio signal is not generated in the array but elsewhere. Since a given time delay corresponds to a different phase shift depending on the signal’s frequency, reflectarrays are primarily useful for reflecting narrowband signals in a single direction.
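
The narrowband limitation can be illustrated with a few lines of Python. The sketch below (with arbitrary toy parameters and standard array-theory formulas) computes the per-element delays that steer the reflection of a plane wave toward a desired angle at a design frequency, wraps them to one carrier period as a phase-shifter realization would, and shows how the gain toward the intended direction degrades as the signal frequency moves away from the design frequency.

```python
import numpy as np

c = 3e8                          # speed of light [m/s]
f0 = 3e9                         # design (carrier) frequency [Hz]
lam0 = c / f0
N = 256                          # reflecting elements along a line (toy size)
d = lam0 / 2                     # element spacing
theta_in = np.deg2rad(30)        # angle of the incoming plane wave
theta_out = np.deg2rad(-60)      # desired reflection angle
S = np.sin(theta_in) + np.sin(theta_out)

n = np.arange(N)
# Element delays that align the reflection toward theta_out at f0,
# wrapped to one carrier period (i.e., realized as phase shifts).
tau = (n * d * S / c) % (1 / f0)

def gain(f):
    """Normalized array gain toward theta_out at frequency f."""
    path_phase = 2 * np.pi * f * n * d * S / c   # geometric phase differences
    applied = 2 * np.pi * f * tau                # phase from the fixed delays
    return np.abs(np.exp(1j * (applied - path_phase)).sum()) / N

for f in [f0, 1.01 * f0, 1.03 * f0, 1.10 * f0]:
    print(f"f = {f / 1e9:5.2f} GHz  ->  gain toward theta_out: {gain(f):.2f}")
```

At the design frequency the reflected contributions add up perfectly, while already a few percent of frequency offset dephases the elements, which is why a delay-configured reflectarray is primarily a narrowband beamformer.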

Reconfigurability

Reconfigurable reflectarrays can change the time delays of each element to steer the reflected beam in different directions at different points in time. The research on this topic has been going on for decades; the book “Reflectarray Antennas: Analysis, Design, Fabrication, and Measurement” from 2014 by Shaker et al. describes many different implementation concepts and experiments.

Recently, there has been growing interest in reconfigurable reflectarrays from the communication-theoretic and signal processing community. This is demonstrated by a series of new overview papers that focus on applications rather than hardware implementations.

The elements in the reflecting surface in Figure 1 are called meta-atoms or reflective elements in these overview papers. The size of a meta-atom/element is smaller than the wavelength, just as for conventional low-gain antennas. In simple words, we can view an element as an antenna that captures a radio signal, holds it for a short period to create a time delay, and then radiates the signal again. One can thus view it as a relay with a delay-and-forward protocol. Even if the signals are not amplified by a reconfigurable reflectarray, there is a non-negligible energy consumption related to the control protocols and the reconfigurability of the elements.

It is important to distinguish between reflecting surfaces and the concept of large intelligent surfaces with active radio transmitters/receivers, which was proposed for data transmission and positioning by Hu, Rusek, and Edfors. This is basically an active MIMO array with densely deployed antennas.

What are the prospects of the technology?

The recent overview papers describe a number of potential use cases for reconfigurable reflectarrays (metasurfaces) in wireless networks, such as range extension, improved physical layer security, wireless power transfer, and spatial modulation. From a conceptual perspective, it is indeed an exciting prospect to build future networks where not only the transmitter and receiver algorithms can be optimized, but the propagation environment can be partially controlled.

However, the research on this topic is still in its infancy. It is of paramount importance to demonstrate practically important use cases where reconfigurable reflectarrays are fundamentally better than existing methods. For it to be economically feasible to turn the technology into a commercial reality, we should not look for use cases where a 10% gain can be achieved, but rather ones where a 10x or 100x gain is possible. This is what Marzetta demonstrated with Massive MIMO, and this is also what it can deliver in 5G.

I haven’t seen any convincing demonstrations of such use cases of reflectarray antennas (metasurfaces) thus far. On the contrary, my new paper “Intelligent Reflecting Surface vs. Decode-and-Forward: How Large Surfaces Are Needed to Beat Relaying?” shows that the new technology can indeed provide range extension, but a basic single-antenna decode-and-forward relay can outperform it unless the surface is very large. There is much left to do on this topic!
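
To convey the flavor of that comparison, here is a toy numerical sketch. The link budget values are arbitrary illustration choices (not taken from the paper), the IRS rate uses the idealized N-element reflection gain with perfectly aligned phases, and the relay uses the textbook repetition-coded decode-and-forward rate.

```python
import numpy as np

# Toy link budget: arbitrary illustration choices, not the values from the paper
p_over_noise = 1e10        # transmit power divided by noise power (100 dB)
beta_sd = 1e-11            # source -> destination pathloss
beta_sr = 1e-8             # source -> surface/relay pathloss
beta_rd = 1e-9             # surface/relay -> destination pathloss

def rate_irs(N):
    """Idealized IRS rate: N lossless elements with perfectly aligned phases."""
    snr = p_over_noise * (np.sqrt(beta_sd) + N * np.sqrt(beta_sr * beta_rd)) ** 2
    return np.log2(1 + snr)

def rate_df():
    """Textbook repetition-coded decode-and-forward rate (half-duplex relay)."""
    return 0.5 * min(np.log2(1 + p_over_noise * beta_sr),
                     np.log2(1 + p_over_noise * (beta_sd + beta_rd)))

for N in [64, 256, 1024, 4096]:
    print(f"N = {N:4d}:  IRS {rate_irs(N):4.2f}  vs  DF relay {rate_df():4.2f} bit/s/Hz")
```

With these particular numbers, the relay keeps its lead until the surface has thousands of elements, which is in line with the qualitative conclusion of the paper; the exact break-even point of course depends on the setup.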