Category Archives: Commentary

Beyond the Cellular Paradigm: Cell-Free Architectures with Radio Stripes

I just finished giving an IEEE Future Networks Webinar on the topic of Cell-free Massive MIMO and radio stripes. The webinar is more technical than my previous popular-science video on the topic, but it can still be viewed as an overview of the basics of the technology and its implementation using radio stripes.

If you missed the chance to view the webinar live, you can access the recording and slides afterwards by following this link. The recording contains 42 minutes of presentation and 18 minutes of Q/A session. If your question was not answered during the session, please feel free to ask it here on the blog instead.

Rician Fading – a Channel Model Often Misunderstood

Line-of-sight channels normally contain many propagation paths, whereof one is the direct path and the others are paths where the signals are scattered off different objects. The interaction between these paths leads to fading phenomena, which are often modeled statistically using Rician fading (sometimes written as Ricean fading). The main assumption is that the complex-valued channel coefficient h \in \mathbb{C} in the complex baseband can be divided into two parts:

h = m e^{j\theta}+ s,

where m \geq 0 is the magnitude of the direct path between the transmitter and receiver and \theta \in [0,2\pi] is the corresponding phase shift. The second part, s, represents all the scattered paths. This part is separated from the direct path since it consists of many paths, each being of roughly the same strength but substantially weaker than the direct path. It is modeled by Rayleigh fading, which implies s\sim \mathcal{CN}(0,\sigma^2). The complex Gaussian distribution is motivated by the central limit theorem, which says that the sum of many independent and identically distributed random variables is approximately Gaussian.

Under these assumptions, the magnitude |h|=|m e^{j\theta} +s| of the channel coefficient is Rice/Rician distributed, which is why it is called Rician fading. More precisely, |h| \sim \mathrm{Rice}(m,\sqrt{\sigma^2/2}), which depends on the magnitude m and the variance \sigma^2 of the scattering.
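To make the notation concrete, here is a minimal numerical sketch (the parameter values are arbitrary choices of mine, not from any paper) that draws samples of h = m e^{j\theta} + s and compares the empirical statistics of |h| with the Rice(m, \sqrt{\sigma^2/2}) distribution; note that scipy parameterizes the Rice distribution by the ratio of m to the scale \sqrt{\sigma^2/2}.

import numpy as np
from scipy.stats import rice

# Arbitrary example parameters (assumptions for illustration only)
m = 1.0          # magnitude of the direct path
sigma2 = 0.5     # variance of the scattered part s ~ CN(0, sigma2)
theta = 0.7      # phase of the direct path (any value gives the same |h| statistics)
num_samples = 100_000

rng = np.random.default_rng(0)

# Scattered part: circularly symmetric complex Gaussian with variance sigma2
s = np.sqrt(sigma2 / 2) * (rng.standard_normal(num_samples)
                           + 1j * rng.standard_normal(num_samples))
h = m * np.exp(1j * theta) + s

# Compare the empirical mean of |h| with the Rice(m, sqrt(sigma2/2)) distribution.
# scipy's rice(b, scale) corresponds to Rice(b*scale, scale), so b = m / scale.
scale = np.sqrt(sigma2 / 2)
print("Empirical E[|h|]:        ", np.abs(h).mean())
print("Rice-distribution E[|h|]:", rice.mean(b=m / scale, scale=scale))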

Interestingly, the distribution does not depend on the phase \theta, because taking the magnitude removes the common phase and s and s e^{j\theta} have the same distribution. Hence, it is common to omit \theta in the performance analysis of Rician fading channels. As long as the channel is perfectly known at the receiver, this makes no difference when quantifying the SNR or capacity.

The common misunderstanding

We cannot neglect the phase \theta when analyzing practical systems where the receiver needs to estimate the channel. The value of \theta varies at the same pace as s, and for exactly the same reason: the transmitter or receiver moves, which induces small phase shifts in every path. Since s contains a large number of paths with approximately the same magnitude but random phases, the sum of the many terms with random phases gives rise to the Gaussian distribution. The phase shift of the direct path must be treated separately since this path is substantially stronger.

Unfortunately, my experience is that the vast majority of papers on Rician fading channels ignore this fact by simply treating m e^{j\theta} as a deterministic constant that is perfectly known at the receiver. I have done this myself in several papers, including this one from 2010 that has received 200+ citations. The results obtained with that simplified model are therefore practically questionable: if we don’t know s in advance, how can we know \theta? At best, the results obtained with a perfectly known \theta can be interpreted as an upper bound on what is practically achievable.

We analyzed the importance of correctly modeling the random phases in a recent paper on cell-free massive MIMO. We compared the performance when using an ideal phase-aware MMSE estimator and a phase-unaware LMMSE estimator. The spectral efficiency loss caused by the lack of knowledge of \theta ranges from 2% to 50% in different simulations, depending on the pilot length and interference situation. Hence, there are cases where it is very important to know the phase correctly.
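To illustrate the basic effect, here is a minimal single-antenna sketch (not the estimators from the paper, which treat the full cell-free setup with many antennas and pilot contamination). The pilot power p and noise variance are arbitrary assumptions for this illustration. The phase-aware MMSE estimator knows m e^{j\theta} and only estimates the scattered part, while the phase-unaware LMMSE estimator only knows the statistics and treats \theta as uniformly distributed, so the channel mean it can exploit is zero.

import numpy as np

# Illustrative parameters (assumptions, not taken from the paper)
p = 1.0           # pilot power
m = 1.0           # direct-path magnitude
sigma_s2 = 0.5    # variance of the scattered part
sigma_n2 = 0.1    # noise variance
N = 200_000       # number of Monte Carlo channel realizations

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, N)            # random phase of the direct path
s = np.sqrt(sigma_s2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
h = m * np.exp(1j * theta) + s
n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = np.sqrt(p) * h + n                          # received pilot signal

# Phase-aware MMSE estimator: knows m*exp(j*theta) and only estimates s
h_mmse = m * np.exp(1j * theta) + \
    (np.sqrt(p) * sigma_s2 / (p * sigma_s2 + sigma_n2)) * (y - np.sqrt(p) * m * np.exp(1j * theta))

# Phase-unaware LMMSE estimator: only knows the statistics, so E[h] = 0
beta = m**2 + sigma_s2                          # average channel gain E[|h|^2]
h_lmmse = (np.sqrt(p) * beta / (p * beta + sigma_n2)) * y

print("MSE, phase-aware MMSE:   ", np.mean(np.abs(h - h_mmse)**2))
print("MSE, phase-unaware LMMSE:", np.mean(np.abs(h - h_lmmse)**2))

The gap between the two MSE values is modest in this toy setting, but it grows or shrinks with the pilot power and the Rician factor, which is the single-antenna analog of the 2%-50% range reported in the paper.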

Infinitely Large Arrays?

A defining feature of the early Massive MIMO literature is the study of communication systems where the number of base station antennas M goes to infinity. Marzetta’s original paper considered this asymptotic regime from the outset. Many other papers derived rate expressions for a finite value of M, and then studied their behavior as M\to \infty . In these papers, the signal-to-noise ratio (SNR) grows linearly with M, and the rate grows towards infinity as \log_2(M) (except when pilot contamination is a limiting factor).
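To spell out the standard reasoning behind this claim, consider the simplest setting of uncorrelated Rayleigh fading, maximum ratio combining, perfect channel knowledge, and no pilot contamination, where p is the transmit power, \beta the large-scale fading coefficient, \sigma^2 the noise power, and \mathbf{h} the M-dimensional channel vector (this notation is introduced only for this illustration). The SNR after receive combining is

\mathrm{SNR}(M) = \frac{p \|\mathbf{h}\|^2}{\sigma^2}, \quad \mathbb{E}\{\|\mathbf{h}\|^2\} = M\beta,

which grows linearly with M, and the ergodic rate therefore behaves as

\mathbb{E}\left\{ \log_2\left(1 + \frac{p \|\mathbf{h}\|^2}{\sigma^2}\right) \right\} \approx \log_2(M) + \log_2\left(\frac{p\beta}{\sigma^2}\right)

for large M, which is the \log_2(M) growth mentioned above.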

I have carried out such asymptotic analysis myself, but there is one issue that has been disturbing me for a while: The SNR cannot grow without bound in practice because we can never receive more power than what was transmitted from the base station. It doesn’t matter how many transmit antennas are used or how razor-sharp the beams become: the law of conservation of energy must be respected. So where is the error in the analysis?

The problem is not that M\to \infty implies the use of infinitely large arrays. If we accept that the universe is infinite, it is conceivable to build an M-antenna planar array for any value of M, for example, using a \sqrt{M} \times \sqrt{M} grid. Such an array is illustrated in Figure 1.

Figure 1: A planar array with 9 x 9 antennas is used to communicate with a user device.

The actual issue is how the channel gains (pathlosses) between the antennas and the user are modeled. We are used to considering channel models based on far-field approximations, where the channel gain is the same for all antennas (when averaging over small-scale fading). However, as the size of the array grows, the approximate far-field models become inaccurate. Instead, one needs to model the following phenomena that appear in the near-field:

  1. The propagation distance is different between each base station antenna and the user.
  2. The effective antenna areas vary in the array, even if the physical areas are equal, since the antennas are observed from different angles.
  3. The polarization losses vary between the antennas, because of the different angles from the antennas to the user.

It might seem complicated to take these phenomena into account, but the following paper shows how to compute the channel gain exactly when the user is centered in front of the array. I generalized this formula to non-centered users in a new paper. We utilized the new result to study the asymptotic behaviors of Massive MIMO and also intelligent reflecting surfaces. It turned out that all three aforementioned phenomena are important to get accurate asymptotic results. When transmitting from an isotropic antenna to a planar Massive MIMO array, the total channel gain converges to 1/3 instead of going to infinity. The remaining 2/3 of the transmit power is lost due to polarization mismatch or radiation into the opposite direction of the array. If either of the first two phenomena is ignored, the channel gain will grow unboundedly as M\to \infty , which is physically impossible. If the last one is ignored, the asymptotic channel gain is overestimated by 50%, so this is the least critical factor.
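For readers who want to see the 1/3 limit appear numerically, here is a minimal sketch of my own (not the exact integral expressions from the papers) that sums a per-antenna gain model containing the three phenomena listed above: free-space spreading 1/(4\pi r^2), an effective-area factor d/r, and a polarization-mismatch factor (x^2+d^2)/r^2, assuming for this sketch a transmitter polarized along the y-axis of the array plane and a midpoint approximation of each antenna. These simplified weights integrate to exactly 1/3 over an infinitely large plane, consistent with the limit discussed above; the geometry values are arbitrary.

import numpy as np

# Total channel gain of a square K x K planar array (M = K^2 antennas),
# with the transmitter at distance d broadside from the array center.
d = 10.0                   # propagation distance to the array (meters)
spacing = 0.5              # antenna spacing (meters); each antenna area A = spacing^2
A = spacing**2

for K in [10, 100, 1000, 2000]:
    coords = (np.arange(K) - (K - 1) / 2) * spacing   # centered antenna coordinates
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x**2 + y**2 + d**2)
    # spreading * effective-area factor * polarization factor, times antenna area
    gain_per_antenna = A * d * (x**2 + d**2) / (4 * np.pi * r**5)
    print(f"M = {K*K:>8d}: total channel gain = {gain_per_antenna.sum():.4f}")

# The printed total gain saturates around 1/3 once the array side length
# greatly exceeds the propagation distance, instead of growing without bound.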

Although the exact channel model can be used to accurately predict the asymptotic SNR behavior, my experience from studying this topic is that the far-field approximations are accurate in most cases of practical interest. The near-field phenomena only become critically important when the propagation distance is shorter than the side length of the array. In other words, the scaling laws that have been obtained in the Massive MIMO literature are usually applicable in practice, even if they break down asymptotically.

Two Roles of Deep Learning in Massive MIMO

The hype around machine learning, particularly deep learning, has spread over the world. It is not only affecting engineers but also philosophers and government agencies, which try to predict what implications machine learning will have on our society.

When the initial excitement has subsided, I think machine learning will be an important tool that many engineers will find useful, alongside more classical tools such as optimization theory and Fourier analysis. I have spent the last two years thinking about what role deep learning can have in the field of communications. This field is rather different from other areas where deep learning has been successful: we deal with man-made systems that have been designed based on rigorous theory to operate close to the fundamental performance limits, for example, the Shannon capacity. Hence, at first sight, there seems to be little room for improvement.

I have nevertheless identified two main applications of supervised deep learning in the physical layer of communication systems: 1) algorithmic approximation and 2) function inversion.

You can read about them in my recent survey paper “Two Applications of Deep Learning in the Physical Layer of Communication Systems” or watch the following video:

In the video, I’m exemplifying the applications through two recent papers where we applied deep learning to improve Massive MIMO systems. Here are links to those papers:

Trinh Van Chien, Emil Björnson, Erik G. Larsson, “Sum Spectral Efficiency Maximization in Massive MIMO Systems: Benefits from Deep Learning,” IEEE International Conference on Communications (ICC), 2019.

Özlem Tugfe Demir, Emil Björnson, “Channel Estimation in Massive MIMO under Hardware Non-Linearities: Bayesian Methods versus Deep Learning,” IEEE Open Journal of the Communications Society, 2020.

Intelligent Reflecting Surfaces: On Use Cases and Path Loss Model

Emerging intelligent reflecting surface (IRS) technology, also known under the names “reconfigurable intelligent surface” and “software-controlled metasurface”, is sometimes marketed as an enabling technology for 6G. How do these surfaces work, what are their use cases, and how much will they improve wireless access performance at large?

The physical principle of an IRS is that the surface is composed of N atoms, each of which acts as an “intelligent” scatterer: a small antenna that receives and re-radiates without amplification, but with a controllable phase shift. Typically, an atom is implemented as a small patch antenna terminated with an adjustable impedance. Assuming the phase shifts are properly adjusted, the N scattered wavefronts can be made to add up constructively at the receiver. If coupling between the atoms is neglected, the analysis of an IRS essentially entails (i) finding the Green’s function of the source (a sum of spherical waves if close, or a plane wave if far away), (ii) computing the impinging field at each atom, (iii) integrating this field over the surface of each atom to find a current density, (iv) computing the radiated field from each atom using the physical-optics approximation, and (v) applying the superposition principle to find the field at the receiver. If the atoms are electrically small, one can approximate the re-radiated field by pretending the atoms are point sources, and then the received “signal” is basically a superposition of phase-shifted (as e^{jkr}), amplitude-scaled (as 1/r) source signals.
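The point-source approximation in the last step can be sketched numerically as follows. This is a toy example with freely chosen geometry, carrier frequency, and unit-amplitude atoms; mutual coupling and the atoms' radiation patterns are ignored, and all constants are suppressed.

import numpy as np

# Toy geometry and frequency (assumptions for illustration only)
wavelength = 0.1                        # 3 GHz carrier
k = 2 * np.pi / wavelength              # wavenumber
N = 64                                  # number of atoms in an 8 x 8 IRS
spacing = wavelength / 2

# Atom positions in the xy-plane, with the IRS centered at the origin
idx = np.arange(8) - 3.5
x, y = np.meshgrid(idx * spacing, idx * spacing)
atoms = np.stack([x.ravel(), y.ravel(), np.zeros(N)], axis=1)

tx = np.array([5.0, 0.0, 3.0])          # transmitter position (meters)
rx = np.array([-2.0, 1.0, 4.0])         # receiver position (meters)

r1 = np.linalg.norm(atoms - tx, axis=1)  # transmitter-to-atom distances
r2 = np.linalg.norm(atoms - rx, axis=1)  # atom-to-receiver distances

# Each atom re-radiates with a controllable phase shift phi_n; the received
# signal is a superposition of 1/(r1*r2)-scaled, phase-shifted contributions.
def received_power(phi):
    contributions = np.exp(1j * (k * (r1 + r2) + phi)) / (r1 * r2)
    return np.abs(contributions.sum())**2

phi_opt = -k * (r1 + r2)                 # co-phase all contributions at the receiver
phi_rand = np.random.default_rng(2).uniform(0, 2 * np.pi, N)

print("Received power, optimized phases:", received_power(phi_opt))
print("Received power, random phases:   ", received_power(phi_rand))

With co-phased contributions, the received power is roughly a factor N larger than the average over random phase configurations, which is the essence of the beamforming gain that an IRS provides.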

A point worth reiterating is that an atom is a scatterer, not a “mirror”. A more subtle point is that the entire IRS as such, consisting of a collection of scatterers, is itself also a scatterer, not a mirror. “Mirrors” exist only in textbooks, when a plane wave is impinging onto an infinitely large conducting plate (none of which exist in practice). Irrespective of how the IRS is constructed, if it is viewed from far enough away, its radiated field will have a beamwidth that is inversely proportional to its size measured in wavelengths.

Two different operating regimes of IRSs can be distinguished:

1. Both transmitter and receiver are in the far-field of the surface. Then the waves seen at the surface can be approximated as planar; the phase differential from the surface center to its edge is less than a few degrees, say. In this case the phase shifts applied to each atom should be linear in the surface coordinate. The foreseen use case would be to improve coverage, or provide an extra path to improve the rank of a point-to-point MIMO channel. Unfortunately, in this case the transmitter-IRS-receiver path loss scales very unfavorably: the end-to-end channel gain is proportional to (N/(r_1r_2))^2, where N is the number of meta-atoms in the surface and r_1, r_2 are the transmitter-IRS and IRS-receiver distances (a short derivation is sketched after this list). The reason is, again, that the IRS itself acts as a (large) scatterer, not a “mirror”. Therefore the IRS has to be quite large before it becomes competitive with a standard single-antenna decode-and-forward relay, a simple, well-understood technology that can be implemented using already widely available components, at small power consumption and with a small form factor. (In addition, full-duplex technology is emerging and may eventually be combined with relaying, or even massive MIMO relaying.)

2. At least one of the transmitter and the receiver is in the near-field of the surface. Here the plane-wave approximation is no longer valid. The IRS can then be sub-optimally configured to act as a “mirror”, in which case the phase shifts vary linearly as a function of the surface coordinate, or it can be configured to act as a “lens”, with optimized phase shifts, which is typically better. As shown for example in this paper, in the near-field case the path loss scales more favorably than in the far-field case. The use cases for the near-field case are less obvious, but one can perhaps think of indoor environments with users close to the walls and every wall covered by an IRS. Another potential use case that I learned about recently is to use the IRS as a MIMO transmitter: a single-antenna transmitter and a nearby IRS can be jointly configured to act as a MIMO beamforming array.
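To see where the (N/(r_1r_2))^2 scaling in case 1 comes from, take the point-source superposition sketched above, co-phase all the atoms, and apply the far-field approximation r_{1,n} \approx r_1 and r_{2,n} \approx r_2 for every atom n (constants suppressed):

\left| \sum_{n=1}^{N} \frac{e^{j(k(r_{1,n}+r_{2,n})+\phi_n)}}{r_{1,n} r_{2,n}} \right| \approx \frac{N}{r_1 r_2},

so the end-to-end channel gain is proportional to N^2/(r_1 r_2)^2. It decays with the product of the two distances, whereas each hop of a decode-and-forward relay only decays with the square of its own distance, which is why the surface must be large before it beats the relay.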

So how useful will IRS technology be in 6G? The question seems open. Indoor coverage in niche scenarios, but isn’t this an already solved problem? Outdoor coverage improvement, but then (cell-free) massive MIMO seems to be a much better option? Building a MIMO transmitter from a single antenna seems exciting, but is it better than using conventional RF? Perhaps it is for the Terahertz bands, where implementation of coherent MIMO may prove truly challenging, that IRS technology will be most beneficial.

A final point is that nothing requires the atoms in an IRS to be located adjacently to one another, or even to form a surface! But they are probably easier to coordinate if they are in more or less the same place.

Revitalizing the Research on Wireless Communications in a New Decade

In the last decade, the research on wireless communications has been strongly focused on the development of 5G. Plenty of papers have started with sentences of the kind: “We consider X, which is a promising method that can greatly improve Y in 5G.” For example, X might be Massive MIMO and Y might be the spectral efficiency. We now know which physical-layer methods made it into the first release of the 5G standard, and which did not. It remains to be seen which methods will actually be used in practice and how large the performance improvements delivered by 5G will be.

There is no doubt that the 5G research has been successful. However, what remains is to improve the developed methods to bridge the gap between the simplifying models and assumptions considered in academia and the practical conditions faced by the industry. Although new scientific papers appear on arXiv.org almost every day, few of them focus on these practically important aspects of the 5G development. Instead, minor variations on well-studied problems dominate and the models are the same simplified ones as ten years ago. We seem to be stuck in doing the same things that led to important advances at the beginning of the 2010s, although we have already solved most problems that can be solved using such simple models. This is why I think we need to revitalize the research!

Two concrete examples

The following two examples explain what I mean.

Example 1: Why would we need more papers on Massive MIMO with uncorrelated Rayleigh fading channels and maximum ratio (MR) processing? We already know that practical channels are spatially correlated and other processing methods are vastly superior to MR while having practically affordable complexity.

Example 2: Why would we need more papers on hybrid beamforming design for flat-fading channels? We already know that the hybrid architecture is only meaningful in wideband mmWave communications, in which case the channels feature frequency-selective fading. The generalization is non-trivial since it is mainly under frequency-selective conditions that the weaknesses/challenges of the hybrid approach appear.

I think that the above-mentioned simplifications were well motivated in the early 2010s when many of the seminal papers on Massive MIMO and mmWave communications appeared. It is usually easier to reach ambitious research goals by taking small steps towards them. It is acceptable to make strong simplifications in the first steps, to achieve the analytical tractability needed to develop a basic intuition and understanding. The later steps should, however, gradually move toward more realistic assumptions that also make the analysis less tractable. We must continuously question whether the initial insights also apply under more practical conditions or whether they were artifacts of the initial simplifications.

Unfortunately, this happened far too seldom in the last decade. Our research community tends to prioritize analytical tractability over realistic models. If a model has been used in prior work, it can often be reused in new papers without being questioned by the reviewers. When I review a paper and question the system model, the authors usually respond with a list of previous papers that use the same model, rather than the theoretical motivation that I would like to see.

It seems to be far easier to publish papers with simple models that enable derivation of analytical “closed-form” expressions and development of “optimal” algorithms, than to tackle more realistic but challenging models where these things cannot be established. The two examples above are symptoms of this problem. We cannot continue in this way if we want to keep the research relevant in this new decade. Massive MIMO and mmWave communications will soon be mainstream technologies!

Entering a new decade

The start of the 2020s is a good time for the research community to start over and think big. Massive MIMO was proposed in a paper from 2010 and initially seemed too good to be true, possibly due to the simplicity of the models used in the early works. In a paper that appeared in 2015, we identified ten “myths” that had flourished when people with a negative attitude towards the technology tried to pinpoint why it wouldn’t work in practice. Today – a decade after its inception – Massive MIMO is a key 5G technology and has even become a marketing term used by cellular operators. The US operator Sprint has reported that the first generation of Massive MIMO base stations improves the spectral efficiency by around 10x in their real networks.

I believe that history will repeat itself during this decade. The research into the next big physical layer technology will take off this year – we just don’t know what it will be. There are already plenty of non-technical papers that try to make predictions, so the quota for such papers is already filled. I’ve written one myself entitled “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”. What we need now are visionary technical papers (like the seminal Massive MIMO paper by Marzetta) that demonstrate mathematically how a new technology can achieve ten-fold performance improvements over the state-of-the-art, for example, in terms of spectral efficiency, reliability, latency, or some other relevant metric. Maybe one or two of the research directions listed in my paper will be at the center of 6G. Much research work remains before we can know; thus, this is the right time to explore a wide range of new ideas.

Five ways to revitalize the research

To keep the wireless communication research relevant, we should stop considering minor variations on previously solved problems and instead focus either on implementation aspects of 5G or on basic research into entirely new methods that might eventually play a role in 6G. In both cases, I have the following five recommendations for how we can conduct more efficient and relevant research in this new decade.

  1. We may start the research on new topics by using simplified models that are analytically tractable, but we must not get stuck in using those models. A beautiful analysis obtained with an unrealistic model might generate more confusion than practical insight. Just remember how much Massive MIMO research focused on the pilot contamination problem simply because it happened to be the asymptotically limiting factor under simplified models, while that is not the case in general.

  2. We must be more respectful towards the underlying physics, particularly electromagnetic theory. We cannot continue normalizing the pathloss variables or guesstimating how they can be computed. When developing a new technology, we must first get the basic models right. Otherwise we risk making fundamental mistakes and – even worse – tricking others into repeating those mistakes for years to come. I covered the danger of normalization in a previous blog post.

  3. We must not forget about previous methods when evaluating new methods, but think carefully about what the true state-of-the-art is. For example, if we want to improve the performance of a cellular network by adding new equipment, we must compare it to the existing equipment that could alternatively have been added. I covered the importance of comparing intelligent reflecting surfaces with relays in a previous blog post.

4. We must make sure new algorithms are reproducible and easily comparable, so that every paper is making useful progress. This can be achieved by publishing simulation code alongside papers and evaluating new algorithms in the same setup as previous algorithms. We might take inspiration from the machine learning field where ImageNet is a common benchmark.

5. We must not take the correctness of models and results in published papers for granted. This is particularly important nowadays when new IEEE journals with very short review times are getting traction. Few scientific experts can promise to make a proper review of a full-length paper in just seven days; thus, many reviews will be substandard. This is a step in the wrong direction and can severely reduce the quality and trustworthiness of published papers.

Let us all make an effort to revitalize the research methodology and selection of research problems to solve in the 2020s. If you have further ideas please share them in the comment field!

Dynamic Cooperation Clusters

By deploying many distributed antennas instead of a few multi-antenna base stations, a more uniform communication performance can be achieved over the coverage area. The peak rates might go down, but the rate that can be guaranteed with 95% probability becomes much higher. This is the main motivation behind Cell-free Massive MIMO, which is the new name for Network MIMO with a large number of antennas (many more than the number of users). The key difference from conventional ultra-dense networks is that the distributed antennas cooperate to transmit phase-coherently in the downlink and to process the received uplink signals coherently. One promising way to deploy these systems is by using radio stripes.

The first papers on Cell-free Massive MIMO assumed that all antennas have access to the downlink data of all users and take part in the uplink signal detection of all users. This is both impractical and unnecessary in a large network, where each user is only physically close to a subset of the antennas. Hence, it makes practical sense that only those antennas that can reach the user with a signal power that is non-negligible compared to the thermal noise should transmit to that user and participate in the detection of its uplink data. 

I designed a framework for this 10 years ago, which I called “dynamic cooperation clusters” (DCC), and it can be readily applied to Cell-free Massive MIMO. The main idea was that every user selects which antennas should serve it in a user-centric manner, which means that any antenna subset can be selected. This stands in contrast to the conventional network-centric approach (which dominated the 4G CoMP literature), where only certain predefined disjoint groups of antennas are allowed to cooperate.

Although the DCC framework is a perfect fit for Cell-free Massive MIMO, the performance analysis that we did 10 years ago was admittedly simplified compared to what is possible with the latest methodology. We considered TDD systems that utilize reciprocity but assumed slowly fading channels that can be estimated without error, thereby avoiding pilot contamination and the computation of ergodic rates. To provide a bridge to the past, we wrote the paper “Scalable Cell-Free Massive MIMO Systems” which revisits the DCC framework in the context of Cell-free Massive MIMO, using the latest analytical methods from the Massive MIMO literature. 

Most importantly, the new paper provides an intuitive way to select the user-centric cooperation clusters based on the uplink pilot transmissions. When a user connects to the network, we suggest that the antenna with the best channel condition is given the responsibility to guarantee that the user receives service. The user is assigned to the pilot that is least affected by pilot contamination in that particular region. Moreover, every antenna serves as many users as there are pilots, but at most one user per pilot, to limit the negative effect of pilot contamination. Under these assumptions, we show that the users get nearly the same rates as if all antennas served all users, but with greatly reduced complexity and fronthaul requirements. In conclusion, scalable and well-performing implementations of Cell-free Massive MIMO are possible!
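The following is a schematic sketch of such a user-centric assignment. It is my own simplification with made-up parameter values and a simple "strongest user per pilot" serving rule; the exact criteria in the paper differ in the details, for example in how the guaranteed service by the strongest antenna is enforced.

import numpy as np

rng = np.random.default_rng(3)
L, K, tau_p = 50, 20, 10            # number of antennas, users, and pilots (made up)

# Made-up large-scale fading coefficients between every antenna and user
beta = 10 ** (rng.uniform(-8, -4, size=(L, K)))

# Step 1: each user is assigned the pilot that is least contaminated at the
# antenna with the best channel to that user.
pilot_of_user = -np.ones(K, dtype=int)
for k in range(K):
    master = np.argmax(beta[:, k])                     # strongest antenna for user k
    contamination = np.zeros(tau_p)
    for t in range(tau_p):
        same_pilot = np.where(pilot_of_user == t)[0]   # users already on pilot t
        contamination[t] = beta[master, same_pilot].sum()
    pilot_of_user[k] = np.argmin(contamination)

# Step 2: each antenna serves at most one user per pilot, namely the user with
# the strongest channel among those sharing that pilot.
serves = np.zeros((L, K), dtype=bool)
for l in range(L):
    for t in range(tau_p):
        users_on_pilot = np.where(pilot_of_user == t)[0]
        if users_on_pilot.size > 0:
            best = users_on_pilot[np.argmax(beta[l, users_on_pilot])]
            serves[l, best] = True

print("Average cluster size per user:", serves.sum(axis=0).mean(), "of", L, "antennas")

The printed average cluster size is much smaller than the total number of antennas, which illustrates where the complexity and fronthaul savings mentioned above come from.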

The following video explains the main ideas: