Intelligent Reflecting Surfaces: On Use Cases and Path Loss Model

Emerging intelligent reflecting surface (IRS) technology, also known under the names “reconfigurable intelligent surface” and “software-controlled metasurface”, is sometimes marketed as an enabling technology for 6G. How does it work, what are its use cases, and how much will it improve wireless access performance at large?

The physical principle of an IRS is that the surface is composed of N atoms, each of which acts as an “intelligent” scatterer: a small antenna that receives and re-radiates without amplification, but with a controllable phase-shift. Typically, an atom is implemented as a small patch antenna terminated with an adjustable impedance. Assuming the phase shifts are properly adjusted, the N scattered wavefronts can be made to add up constructively at the receiver. If coupling between the atoms is neglected, the analysis of an IRS essentially entails (i) finding the Green’s function of the source (a sum of spherical waves if close, or a plane wave if far away), (ii) computing the impinging field at each atom, (iii) integrating this field over the surface of each atom to find a current density, (iv) computing the radiated field from each atom using the physical-optics approximation, and (v) applying the superposition principle to find the field at the receiver. If the atoms are electrically small, one can approximate the re-radiated field by pretending the atoms are point sources, and then the received “signal” is basically a superposition of phase-shifted (as e^{jkr}), amplitude-scaled (as 1/r) source signals.
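The point-source approximation in step (v) can be sketched numerically. This is a minimal illustration, not the full physical-optics computation: the surface size, carrier frequency, terminal positions, and the function name `irs_received_field` are all illustrative choices.

```python
import numpy as np

def irs_received_field(tx, rx, atoms, phases, k):
    """Received field as a superposition of point-source re-radiations:
    each atom's contribution decays as 1/r on both hops, picks up the
    propagation phase exp(-1j*k*r), plus a controllable phase shift."""
    r1 = np.linalg.norm(atoms - tx, axis=1)   # transmitter -> atom
    r2 = np.linalg.norm(atoms - rx, axis=1)   # atom -> receiver
    per_atom = np.exp(-1j * k * (r1 + r2)) / (r1 * r2)
    return np.sum(per_atom * np.exp(1j * phases))

wavelength = 0.1                              # 3 GHz carrier (meters)
k = 2 * np.pi / wavelength
rng = np.random.default_rng(0)
N = 64
atoms = np.column_stack([rng.uniform(-0.5, 0.5, N),
                         rng.uniform(-0.5, 0.5, N),
                         np.zeros(N)])        # atoms on a 1 m x 1 m surface
tx = np.array([0.0, 0.0, 20.0])
rx = np.array([10.0, 0.0, 15.0])
r1 = np.linalg.norm(atoms - tx, axis=1)
r2 = np.linalg.norm(atoms - rx, axis=1)

# Choosing the phase shifts to cancel the propagation phases makes all
# N scattered wavefronts add up constructively at the receiver:
coherent = irs_received_field(tx, rx, atoms, k * (r1 + r2), k)
random_cfg = irs_received_field(tx, rx, atoms, rng.uniform(0, 2 * np.pi, N), k)
print(abs(coherent), abs(random_cfg))         # coherent config is far larger
```

With properly adjusted phases the magnitude is simply the sum of the per-atom amplitudes 1/(r1·r2); with unadjusted phases the contributions largely cancel.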

A point worth reiterating is that an atom is a scatterer, not a “mirror”. A more subtle point is that the entire IRS, consisting of a collection of scatterers, is itself also a scatterer, not a mirror. “Mirrors” exist only in textbooks, where a plane wave impinges on an infinitely large conducting plate (neither of which exists in practice). Irrespective of how the IRS is constructed, if it is viewed from far enough away, its radiated field will have a beamwidth that is inversely proportional to its size measured in wavelengths.
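The inverse relation between beamwidth and electrical size can be quantified with the textbook rule of thumb for a uniformly illuminated line aperture (half-power beamwidth ≈ 0.886·λ/D radians); the 1 m aperture and the two carrier frequencies below are arbitrary examples.

```python
import numpy as np

def beamwidth_deg(aperture_m, wavelength_m):
    """Half-power beamwidth of a uniformly illuminated line aperture,
    ~0.886*lambda/D radians: inversely proportional to the aperture size
    measured in wavelengths, however the surface is constructed."""
    return np.degrees(0.886 * wavelength_m / aperture_m)

print(beamwidth_deg(1.0, 0.1))    # 1 m surface at 3 GHz: ~5 degrees
print(beamwidth_deg(1.0, 0.005))  # same surface at 60 GHz: ~0.25 degrees
```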

Two different operating regimes of IRSs can be distinguished:

1. Both transmitter and receiver are in the far-field of the surface. Then the waves seen at the surface can be approximated as planar; the phase differential from the surface center to its edge is less than a few degrees, say. In this case the phase shifts applied to each atom should be linear in the surface coordinate. The foreseen use case would be to improve coverage, or to provide an extra path that improves the rank of a point-to-point MIMO channel. Unfortunately, in this case the transmitter-IRS-receiver path loss scales very unfavorably: the received power is proportional to (N/(r_1 r_2))^2, where N is the number of meta-atoms in the surface and r_1, r_2 are the distances from the surface to the transmitter and receiver. The reason is, again, that the IRS itself acts as a (large) scatterer, not a “mirror”. Therefore the IRS has to be quite large before it becomes competitive with a standard single-antenna decode-and-forward relay: a simple, well-understood technology that can be implemented using already widely available components, at small power consumption and with a small form factor. (In addition, full-duplex technology is emerging and may eventually be combined with relaying, or even massive MIMO relaying.)
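The unfavorable scaling can be illustrated with a toy function (constants omitted, ideal phase alignment assumed); the distances and atom counts are arbitrary example values.

```python
def irs_power_gain(n_atoms, r1, r2):
    """Relative received power through an ideally phase-aligned far-field
    IRS (constant factors omitted): each atom contributes an amplitude
    ~ 1/(r1*r2), so the coherent sum gives power ~ (n_atoms/(r1*r2))**2.
    Note the PRODUCT r1*r2 in the denominator -- a scatterer, not a
    mirror, whose loss would instead depend on the total distance."""
    return (n_atoms / (r1 * r2)) ** 2

# Quadrupling the number of atoms buys 16x received power ...
print(irs_power_gain(400, 50, 50) / irs_power_gain(100, 50, 50))    # 16.0
# ... but doubling both distances costs a factor 16 as well:
print(irs_power_gain(100, 50, 50) / irs_power_gain(100, 100, 100))  # 16.0
```

This is why the surface must be large (N must grow with the distance product) before the IRS path carries a power comparable to alternatives such as a regenerative relay.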

2. At least one of the transmitter and the receiver is in the surface near-field, where the plane-wave approximation is no longer valid. The IRS can then be sub-optimally configured to act as a “mirror”, in which case the phase shifts vary linearly as a function of the surface coordinate. Alternatively, it can be configured to act as a “lens” with optimized phase shifts, which is typically better. As shown for example in this paper, in the near-field case the path loss scales more favorably than in the far-field case. The use cases for the near-field regime are less obvious, but one can think of indoor environments with users close to the walls and every wall covered by an IRS. Another potential use case that I learned about recently is to use the IRS as a MIMO transmitter: a single-antenna transmitter near an IRS can be jointly configured with the surface to act as a MIMO beamforming array.
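A small numerical experiment illustrates the mirror-versus-lens distinction in the near field. The geometry (a 10 m strip of λ/2-spaced atoms with both terminals a few meters away) is an arbitrary example, and the “mirror” is modeled as the linear phase profile matched to the plane-wave angles of the two terminals.

```python
import numpy as np

wavelength = 0.1
k = 2 * np.pi / wavelength
x = (np.arange(200) - 99.5) * wavelength / 2       # 10 m strip of atoms
atoms = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])
tx = np.array([-3.0, 0.0, 4.0])                    # both terminals are in
rx = np.array([5.0, 0.0, 3.0])                     # the surface near-field
r1 = np.linalg.norm(atoms - tx, axis=1)
r2 = np.linalg.norm(atoms - rx, axis=1)
channel = np.exp(-1j * k * (r1 + r2)) / (r1 * r2)  # per-atom cascade

# "Lens": optimized (phase-conjugating) shifts focus on the receiver.
lens = np.abs(np.sum(channel * np.exp(1j * k * (r1 + r2))))
# "Mirror": phase shifts linear in the surface coordinate, matched to
# the plane-wave (far-field) angles -- suboptimal here because the
# impinging wavefronts are curved, not planar.
sin_in = tx[0] / np.linalg.norm(tx)
sin_out = rx[0] / np.linalg.norm(rx)
mirror = np.abs(np.sum(channel * np.exp(1j * k * x * (sin_in + sin_out))))
print(lens / mirror)    # the lens configuration collects far more power
```

The lens phases cancel every per-atom propagation phase, so it is the optimal configuration; any linear profile leaves residual near-field phase curvature and collects less power.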

So how useful will IRS technology be in 6G? The question seems open. Indoor coverage in niche scenarios, but isn’t this an already solved problem? Outdoor coverage improvement, but then (cell-free) massive MIMO seems to be a much better option? Building MIMO transmitters from a single antenna seems exciting, but is it better than using conventional RF hardware? Perhaps it is in the terahertz bands, where implementation of coherent MIMO may prove truly challenging, that IRS technology will be most beneficial.

A final point is that nothing requires the atoms in an IRS to be adjacent to one another, or even to form a surface! But they are probably easier to coordinate if they are in more or less the same place.

Revitalizing the Research on Wireless Communications in a New Decade

In the last decade, the research on wireless communications has been strongly focused on the development of 5G. Plenty of papers have started with sentences of the kind: “We consider X, which is a promising method that can greatly improve Y in 5G.” For example, X might be Massive MIMO and Y might be the spectral efficiency. We now know which physical-layer methods made it into the first release of the 5G standard, and which did not. It remains to be seen which methods will actually be used in practice and how large performance improvements 5G can deliver.

There is no doubt that the 5G research has been successful. However, what remains is to improve the developed methods to bridge the gap between the simplifying models and assumptions considered in academia and the practical conditions faced by the industry. Although new scientific papers appear on arXiv.org almost every day, few of them focus on these practically important aspects of the 5G development. Instead, minor variations on well-studied problems dominate, and the models are the same simplified ones as ten years ago. We seem to be stuck doing the same things that led to important advances at the beginning of the 2010s, although we have already solved most of the problems that can be solved using such simple models. This is why I think we need to revitalize the research!

Two concrete examples

The following two examples explain what I mean.

Example 1: Why would we need more papers on Massive MIMO with uncorrelated Rayleigh fading channels and maximum ratio (MR) processing? We already know that practical channels are spatially correlated and other processing methods are vastly superior to MR while having practically affordable complexity.

Example 2: Why would we need more papers on hybrid beamforming design for flat-fading channels? We already know that the hybrid architecture is only meaningful in wideband mmWave communications, in which case the channels feature frequency-selective fading. The generalization is non-trivial since it is mainly under frequency-selective conditions that the weaknesses/challenges of the hybrid approach appear.

I think that the above-mentioned simplifications were well motivated in the early 2010s, when many of the seminal papers on Massive MIMO and mmWave communications appeared. It is usually easier to reach ambitious research goals by taking small steps towards them. It is acceptable to make strong simplifications in the first steps, to achieve the analytical tractability needed to develop a basic intuition and understanding. The later steps should, however, gradually move toward more realistic assumptions that also make the analysis less tractable. We must continuously question whether the initial insights apply also under more practical conditions or whether they were artifacts of the initial simplifications.

Unfortunately, this happened far too seldom in the last decade. Our research community tends to prioritize analytical tractability over realistic models. If a model has been used in prior work, it can often be reused in new papers without being questioned by the reviewers. When I review a paper and question the system model, the authors usually respond with a list of previous papers that use the same model, rather than the theoretical motivation that I would like to see.

It seems to be far easier to publish papers with simple models that enable derivation of analytical “closed-form” expressions and development of “optimal” algorithms, than to tackle more realistic but challenging models where these things cannot be established. The two examples above are symptoms of this problem. We cannot continue in this way if we want to keep the research relevant in this new decade. Massive MIMO and mmWave communications will soon be mainstream technologies!

Entering a new decade

The start of the 2020s is a good time for the research community to start over and think big. Massive MIMO was proposed in a paper from 2010 and initially seemed too good to be true, possibly due to the simplicity of the models used in the early works. In a paper that appeared in 2015, we identified ten “myths” that had flourished when people with a negative attitude toward the technology tried to pinpoint why it wouldn’t work in practice. Today – a decade after its inception – Massive MIMO is a key 5G technology and has even become a marketing term used by cellular operators. The US operator Sprint has reported that the first generation of Massive MIMO base stations improves the spectral efficiency by around 10x in their real networks.

I believe history will repeat itself during this decade. The research into the next big physical-layer technology will take off this year – we just don’t know what it will be. There are already plenty of non-technical papers that try to make predictions, so the quota for such papers is already filled. I’ve written one myself, entitled “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”. What we need now are visionary technical papers (like the seminal Massive MIMO paper by Marzetta) that demonstrate mathematically how a new technology can achieve ten-fold performance improvements over the state-of-the-art, for example, in terms of spectral efficiency, reliability, latency, or some other relevant metric. Maybe one or two of the research directions listed in my paper will be at the center of 6G. Much research work remains before we can know, thus this is the right time to explore a wide range of new ideas.

Five ways to revitalize the research

To keep wireless communication research relevant, we should stop considering minor variations on previously solved problems and instead focus either on implementation aspects of 5G or on basic research into entirely new methods that might eventually play a role in 6G. In both cases, I have the following five recommendations for how we can conduct more efficient and relevant research in this new decade.

1. We may start the research on new topics by using simplified models that are analytically tractable, but we must not get stuck using those models. A beautiful analysis obtained with an unrealistic model may cause more confusion than practical insight. Just remember how much Massive MIMO research focused on the pilot contamination problem, simply because it happened to be the asymptotically limiting factor under simplified models, which is not the case in general.

2. We must be more respectful towards the underlying physics, particularly electromagnetic theory. We cannot continue normalizing the pathloss variables or guesstimating how they can be computed. When developing a new technology, we must first get the basic models right. Otherwise we risk making fundamental mistakes and – even worse – tricking others into repeating those mistakes for years to come. I covered the danger of normalization in a previous blog post.

3. We must not forget about previous methods when evaluating new methods, but think carefully about what the true state-of-the-art is. For example, if we want to improve the performance of a cellular network by adding new equipment, we must compare it to the existing equipment that could alternatively have been added. As an example, I covered the importance of comparing intelligent reflecting surfaces with relays in a previous blog post.

4. We must make sure that new algorithms are reproducible and easily comparable, so that every paper makes useful progress. This can be achieved by publishing simulation code alongside papers and evaluating new algorithms in the same setups as previous algorithms. We might take inspiration from the machine learning field, where ImageNet is a common benchmark.

5. We must not take the correctness of models and results in published papers for granted. This is particularly important nowadays, when new IEEE journals with very short review times are gaining traction. Few scientific experts can promise a proper review of a full-length paper in just seven days; thus, many reviews will be substandard. This is a step in the wrong direction and can severely reduce the quality and trustworthiness of published papers.

Let us all make an effort to revitalize the research methodology and the selection of research problems to solve in the 2020s. If you have further ideas, please share them in the comment field!

Dynamic Cooperation Clusters

By deploying many distributed antennas instead of a few multi-antenna base stations, a more uniform communication performance can be achieved over a coverage area. The peak rates might go down, but the rate that can be guaranteed with 95% probability goes up substantially. This is the main motivation behind Cell-free Massive MIMO, which is the new name for Network MIMO with a large number of antennas (many more than the number of users). The key difference from conventional ultra-dense networks is that the distributed antennas cooperate to transmit phase-coherently in the downlink and to process the received uplink signals coherently. One promising way to deploy these systems is by using radio stripes.

The first papers on Cell-free Massive MIMO assumed that all antennas have access to the downlink data of all users and take part in the uplink signal detection of all users. This is both impractical and unnecessary in a large network, where each user is only physically close to a subset of the antennas. Hence, it makes practical sense that only those antennas that can reach the user with a signal power that is non-negligible compared to the thermal noise should transmit to that user and participate in the detection of its uplink data. 

I designed a framework for this 10 years ago, which I called “dynamic cooperation clusters” (DCC) and it can be readily applied to Cell-free Massive MIMO. The main idea was that every user selects which antennas should serve it in a user-centric manner, which means that any antenna subset can be selected. This stands in contrast to the conventional network-centric approach (which dominated the 4G CoMP literature) where only certain predefined disjoint groups of antennas are allowed to cooperate.

Although the DCC framework is a perfect fit for Cell-free Massive MIMO, the performance analysis that we did 10 years ago was admittedly simplified compared to what is possible with the latest methodology. We considered TDD systems that utilize reciprocity but assumed slowly fading channels that can be estimated without error, thereby avoiding pilot contamination and the computation of ergodic rates. To provide a bridge to the past, we wrote the paper “Scalable Cell-Free Massive MIMO Systems” which revisits the DCC framework in the context of Cell-free Massive MIMO, using the latest analytical methods from the Massive MIMO literature. 

Most importantly, the new paper provides an intuitive way to select the user-centric cooperation clusters based on the uplink pilot transmissions. When a user connects to the network, we suggest that the antenna with the best channel condition is given the responsibility to guarantee the user service. The user is assigned to the pilot that is least affected by pilot contamination in that particular region. Moreover, each antenna serves at most one user per pilot (and thus at most as many users as there are pilots), which limits the negative effect of pilot contamination. Under these assumptions, we show that the users get nearly the same rates as if all the antennas served all users, but with greatly reduced complexity and fronthaul requirements. In conclusion, scalable and well-performing implementations of Cell-free Massive MIMO are possible!
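The selection rules above can be sketched in a few lines of Python. This is my own simplified reading of the scheme, not the exact algorithm of the paper (for instance, the master antenna's service guarantee is omitted), and all dimensions and gain values are arbitrary.

```python
import numpy as np

def assign_clusters(gains_db, noise_db, n_pilots):
    """Toy user-centric association: gains_db[l, k] is the large-scale
    gain between antenna l and user k; users join one by one."""
    L, K = gains_db.shape
    masters = np.empty(K, dtype=int)
    pilots = np.empty(K, dtype=int)
    for k in range(K):
        masters[k] = np.argmax(gains_db[:, k])    # best-channel antenna
        # Pick the pilot least contaminated as heard by that antenna:
        contamination = np.zeros(n_pilots)
        for u in range(k):
            contamination[pilots[u]] += 10 ** (gains_db[masters[k], u] / 10)
        pilots[k] = int(np.argmin(contamination))
    # Serving clusters: per pilot, an antenna serves only the strongest
    # user it hears above the noise floor -- at most one user per pilot.
    serve = np.zeros((L, K), dtype=bool)
    for l in range(L):
        for p in range(n_pilots):
            on_p = [k for k in range(K)
                    if pilots[k] == p and gains_db[l, k] > noise_db]
            if on_p:
                serve[l, max(on_p, key=lambda k: gains_db[l, k])] = True
    return masters, pilots, serve

rng = np.random.default_rng(7)
gains = rng.uniform(-120.0, -60.0, size=(12, 8))  # 12 antennas, 8 users
masters, pilots, serve = assign_clusters(gains, noise_db=-100.0, n_pilots=4)
print(serve.sum(axis=1))   # users served per antenna: never above 4
```

The per-pilot cap is what makes the scheme scalable: each antenna's workload is bounded by the number of pilots, not by the number of users in the network.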

The following video explains the main ideas:

Cell-free Massive MIMO and Radio Stripes

I have recorded a popular science video that explains how a cell-free network architecture can provide major performance improvements over 5G cellular networks, and why radio stripes are a promising way to implement it:

If you want more technical details, I recommend our recent survey paper “Ubiquitous Cell-Free Massive MIMO Communications“. One of the authors, Dr. Hien Quoc Ngo at Queen’s University Belfast, has created a blog about Cell-free Massive MIMO. In particular, it contains a list of papers on the topic and links to the programming code of some of them.

Centralized Versus Distributed Processing in Cell-Free Massive MIMO

A figure from my first paper on Network MIMO, which is nowadays called Cell-Free Massive MIMO.

The new Cell-Free Massive MIMO concept has its roots in the classical Network MIMO concept and has also been given many other names over the years (e.g., coordinated multipoint). When I started my research on the topic in 2009, the standard assumption was that a set of base stations jointly transmitted to a set of users by sharing both the data signals and their respective channel state information (CSI). In my first journal paper, we showed that one can get away with only sharing the data signals between the base stations, because each one only needs local CSI (between itself and the users) to beamform to the users. The price to pay is that the base stations cannot cancel each other’s interference, so each one should preferably have multiple antennas so that it can control how much interference it causes. This was my first well-cited paper but, to be honest, I am still not sure how significant the results are.

On the one hand, it is very convenient to only utilize local CSI at every base station, because it can be estimated from uplink pilots in a TDD system, which was a key motivation behind our 2010 paper. The time-critical precoding computation can then be initiated immediately after the pilots have been received, instead of waiting for the CSI to be shared between the base stations. This property was later utilized in the first Cell-Free Massive MIMO papers [Ngo, Nayebi] to alleviate the need for sharing CSI.
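The local-CSI idea can be illustrated with maximum-ratio precoding on synthetic channels; the dimensions and the i.i.d. Gaussian fading are arbitrary illustrative choices, not the model of any particular paper.

```python
import numpy as np

rng = np.random.default_rng(1)
B, M, K = 3, 4, 2      # base stations, antennas per BS, users
# By TDD reciprocity, BS b estimates its LOCAL channels H[b] (M x K)
# from uplink pilots; no CSI is exchanged over the fronthaul.
H = rng.standard_normal((B, M, K)) + 1j * rng.standard_normal((B, M, K))

# Local maximum-ratio precoding, computed independently at each BS:
W = H.conj() / np.linalg.norm(H, axis=1, keepdims=True)

# Effective K x K downlink channel (entry [i, j]: signal intended for
# user j as received by user i), summed over the cooperating BSs:
G = sum(H[b].T @ W[b] for b in range(B))
print(np.round(np.diag(G), 2))   # useful gains: coherent, real-valued
# The off-diagonal interference in G is NOT cancelled -- the price of
# using only local CSI, and why multi-antenna BSs help suppress it.
```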

On the other hand, CSI is usually a small fraction of the signaling between a base station and the rest of the system in Network MIMO. The majority of the signaling consists of the data signals; for example, if a coherence block with 200 channel uses consists of 20 pilot symbols and 180 data symbols, then there are 180/20 = 9 times more data symbols than CSI. Interestingly, our recent paper “Making Cell-Free Massive MIMO Competitive With MMSE Processing and Centralized Implementation” shows that if Cell-Free Massive MIMO is implemented by sending all CSI to an edge-cloud processor that takes care of all the signal processing, both the communication performance and the signaling load can be greatly improved as compared to the fully distributed approach (which was considered in my 2010 paper and then became the standard assumption in the Cell-Free Massive MIMO literature).
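The 9x figure is just the ratio of data symbols to pilot symbols in the coherence block; as a toy accounting (ignoring quantization and protocol overhead):

```python
# Toy fronthaul accounting for a coherence block of tau_c channel uses,
# tau_p of which are pilots (so tau_c - tau_p carry data).
def csi_share(tau_c, tau_p):
    return tau_p / tau_c          # fraction of symbols that carry CSI

tau_c, tau_p = 200, 20
print((tau_c - tau_p) / tau_p)    # 9.0: nine data symbols per pilot
print(csi_share(tau_c, tau_p))    # 0.1: CSI is a tenth of the block
```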

The bottom line is that it is hard to make a distributed network implementation competitive with a centralized one. Unless we can find a really clever implementation, there is a risk that we lose too much in communication performance and also raise the fronthaul capacity requirements.

Is the Pathloss Larger at mmWave Frequencies?

The range of mmWave communication signals is often said to be lower than that of signals in the conventional sub-6 GHz bands. This is usually the case, but the reason might not be the one you think. I will explain what I mean in this blog post.

If one takes a look at the classical free-space pathloss formula, the received power P_r is

(1)   \begin{equation*}P_r = P_t \left( \frac{\lambda}{4\pi d} \right)^2,\end{equation*}

where the transmit power is denoted by P_t, the wavelength is \lambda, and the propagation distance is d. This formula shows that the received power is proportional to the wavelength squared and, thus, will be smaller when we increase the carrier frequency; that is, the received power is lower at 60 GHz (\lambda=5 mm) than at 3 GHz (\lambda=10 cm). But there is an important catch: the dependence on \lambda is due to the underlying assumption of having a receive antenna with the effective area

(2)   \begin{equation*}A = \frac{\lambda^2}{4\pi}.\end{equation*}

Hence, if we consider a receive antenna with an arbitrary effective area A, we can instead write the received power in (1) as

(3)   \begin{equation*}P_r = P_t  \frac{A}{4\pi d^2},\end{equation*}

which is frequency-independent as long as we keep the antenna area A fixed as we change the carrier frequency. Since the area of a fixed-gain antenna is proportional to \lambda^2, as exemplified in (2), in practice we will need to use arrays of multiple antennas in mmWave bands to achieve the same total antenna area A as in lower bands. This is what is normally done in mmWave communications for cellular networks, while a single high-gain antenna with a large area can be used for fixed links (e.g., backhaul between base stations, or between a satellite and a ground station). As explained in Section 7.5 of Massive MIMO Networks, one can actually play with the antenna areas at both the transmitter and the receiver to keep the same pathloss in the mmWave bands while reducing the total antenna area!
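Equations (1)-(3) are easy to check numerically; `rx_power` is an illustrative helper (not a standard library function), and the 100 m distance and 0.01 m² area are arbitrary example values.

```python
import numpy as np

def rx_power(p_t, d, wavelength, area=None):
    """Free-space received power. With area=None, the receiver is an
    isotropic antenna of effective area lambda**2/(4*pi), recovering the
    classical form (1); passing a fixed area gives the form in (3)."""
    if area is None:
        area = wavelength ** 2 / (4 * np.pi)
    return p_t * area / (4 * np.pi * d ** 2)

c = 3e8
p3 = rx_power(1.0, 100.0, c / 3e9)     # 3 GHz, fixed-gain antenna
p60 = rx_power(1.0, 100.0, c / 60e9)   # 60 GHz, fixed-gain antenna
print(p3 / p60)                        # 20**2 = 400x weaker at 60 GHz
# Fixing the effective area instead removes the frequency dependence:
print(rx_power(1.0, 100.0, c / 3e9, area=0.01) ==
      rx_power(1.0, 100.0, c / 60e9, area=0.01))   # True
```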

So why is the signal range shorter in mmWave bands?

The main reasons for the shorter range are:

  • Larger propagation losses in non-line-of-sight scenarios, for example, due to less scattering (fewer propagation paths) and larger penetration losses.
  • The use of more bandwidth, which leads to a lower SNR.
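The bandwidth effect is easy to quantify: the thermal noise power is N_0·B, so the SNR drops by 10·log10 of the bandwidth ratio. The transmit power, path loss, and noise figure below are arbitrary example values.

```python
import numpy as np

def snr_db(p_tx_dbm, pathloss_db, bandwidth_hz, noise_figure_db=7.0):
    """Link-budget SNR with thermal noise density kT = -174 dBm/Hz: the
    noise power grows as 10*log10(B), so widening the band costs SNR."""
    noise_dbm = -174.0 + 10 * np.log10(bandwidth_hz) + noise_figure_db
    return p_tx_dbm - pathloss_db - noise_dbm

narrow = snr_db(23.0, 110.0, 20e6)    # 20 MHz sub-6 GHz carrier
wide = snr_db(23.0, 110.0, 400e6)     # 400 MHz mmWave carrier
print(narrow - wide)                  # ~13 dB lost to bandwidth alone
```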

Multiple Antenna Technologies for Beyond 5G

I am one of the guest editors of the JSAC special issue on “Multiple Antenna Technologies for Beyond 5G”, which had its submission deadline on October 1. We received 133 submissions that span emerging topics such as Cell-free Massive MIMO, intelligent reflecting surfaces, terahertz communications, new hardware architectures (e.g., lens arrays), and index modulation. It will take a lot of hard work to review all these submissions, but I am convinced that the selected papers will be of high quality and present a range of interesting concepts that can be utilized in beyond-5G systems.

In addition to the technical papers, the guest editors have also written a survey paper that has the same name as the special issue. A draft of it is available on arXiv. This paper describes the state-of-the-art and open problems related to several of the topics described above.
