All posts by Erik G. Larsson

IEEE Globecom Workshop on Wireless Communications for Distributed Intelligence

The 2021 IEEE GLOBECOM workshop on “Wireless Communications for Distributed Intelligence” will be held in Madrid, Spain, in December 2021. This workshop aims to investigate and redefine the role of wireless communications in decentralized Artificial Intelligence (AI) systems, including distributed sensing, information processing, automatic control, learning, and inference.

We invite submissions of original work on related topics, including but not limited to the following:

  • Network architecture and protocol design for AI-enabled 6G
  • Federated learning (FL) in wireless networks
  • Multi-agent reinforcement learning in wireless networks
  • Communication efficiency in distributed machine learning (ML)
  • Energy efficiency in distributed ML
  • Cross-layer (PHY, MAC, network layer) design for distributed ML
  • Wireless resource allocation for distributed ML
  • Signal processing for distributed ML
  • Over-the-air (OTA) computation for FL
  • Emerging PHY technologies for OTA FL
  • Privacy and security issues of distributed ML
  • Adversary-resilient distributed sensing, learning, and inference
  • Fault tolerance in distributed stochastic gradient descent (DSGD) systems
  • Fault tolerance in multi-agent systems
  • Fundamental limits of distributed ML with imperfect communication

6G Physical layer development: the race is on

A new EU-funded 6G initiative, the REINDEER project, joins forces from academia and industry to develop and build a new type of multi-antenna-based smart connectivity platform integral to future 6G systems. From Ericsson’s news site:

The project’s name is derived from REsilient INteractive applications through hyper Diversity in Energy-Efficient RadioWeaves technology, and the development of “RadioWeaves” technology will be a key deliverable of the project. This new wireless access infrastructure consists of a fabric of distributed radio, compute, and storage resources. It will advance the ideas of large-scale intelligent surfaces and cell-free wireless access to offer capabilities far beyond those of future 5G networks. It is expected to offer capacity that scales towards quasi-infinite, perceived zero latency, and interaction with a large number of embedded devices.

Read more: Academic paper on the RadioWeaves concept (Asilomar SSC 2019) 

Who is Who in Massive MIMO?

I taught a course on complex networks this fall, and one component of the course is a hands-on session where students use the SNAP C++ and Python libraries for graph analysis, and Gephi for visualization. One available dataset is DBLP, a large publication database in computer science that actually includes a lot of electrical engineering as well.

In a small experiment I filtered DBLP for papers with both “massive” and “MIMO” in the title, and analyzed the resulting co-author graph. There are 17200 papers and some 6000 authors. There is one large connected component, along with over 400 much smaller connected components!
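The experiment can be reproduced in outline with a few lines of Python. The records below are invented stand-ins for the real DBLP dump; this is a minimal sketch of the filtering, the co-author graph construction, and the component count:

```python
from collections import defaultdict
from itertools import combinations

# Toy DBLP-style records: (title, list of authors). The real experiment
# used the full DBLP dump; these few entries are made up for illustration.
papers = [
    ("Massive MIMO for next generation wireless systems", ["A", "B"]),
    ("Energy efficiency of massive MIMO uplink", ["B", "C"]),
    ("Graph neural networks", ["D", "E"]),
    ("Massive MIMO pilot contamination", ["F", "G"]),
]

# Filter: keep papers whose title contains both "massive" and "MIMO"
def is_massive_mimo(title):
    t = title.lower()
    return "massive" in t and "mimo" in t

filtered = [p for p in papers if is_massive_mimo(p[0])]

# Build the weighted co-author graph: edge weight = number of joint papers
edges = defaultdict(int)
adj = defaultdict(set)
for _, authors in filtered:
    for u, v in combinations(sorted(set(authors)), 2):
        edges[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)

# Count connected components with a simple graph search
def components(adj):
    seen, comps = set(), 0
    for start in adj:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node] - seen)
    return comps

print(components(adj))  # 2: the components {A, B, C} and {F, G}
```

The edge weights collected in `edges` are what Gephi renders as edge thickness in the figure.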

Then I looked more closely at authors who have written at least 20 papers. Each node is an author, its size is proportional to his/her number of “massive MIMO papers”, and its color represents identified communities. Edge thicknesses represent the number of co-authored papers.  Some long-standing collaborators, former students, and other friends stand out.  (Click on the figure to enlarge it.)

To remind readers of the obvious, prolificacy is not the same as impact, even though the two are often correlated. Also, the study is not entirely rigorous. For one thing, it trusts DBLP to properly distinguish authors with the same name (consider, e.g., “Li Li”), and I do not know how well it really does that. Second, in a random inspection, all papers that passed the filter dealt with “massive MIMO” as we know it; in principle, though, the search criterion could also catch papers on, say, MIMO control theory for a massive power plant. The filtering also misses some papers written before the “massive MIMO” term was established, perhaps most importantly Thomas Marzetta’s seminal paper on “unlimited antennas”. Third, the analysis is limited to publications covered by DBLP, which also means, conversely, that there is no specific quality threshold for the publication venues. Anyone interested in learning more, drop me a note.

Record 5G capacity via software upgrade!

In the news: Nokia delivers record 5G capacity gains through a software upgrade. No surprise! Years ago, we predicted that this would happen.

What does this software upgrade consist of? I can only speculate. It is, in all likelihood, more than the usual (and endless) operating-system bugfixes we habitually think of as “software upgrades”. Could it even be something that goes to the core of what massive MIMO is? Replacing eigen-beamforming with true reciprocity-based beamforming?! Who knows. Replacing maximum-ratio processing with zero-forcing combining?! Or, even more mind-boggling, implementing the more sophisticated processing that has been filling the academic journals in recent years? We don’t know! But it will certainly be interesting to find out at some point, and it seems safe to assume that this race will continue.
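To make the maximum-ratio versus zero-forcing distinction concrete, here is a toy sketch for a two-user uplink with M antennas. The real-valued channels and all numbers are invented for illustration; actual systems of course operate on complex baseband samples with estimated channels:

```python
import random

# Toy 2-user uplink with M antennas; real-valued channels for simplicity.
random.seed(1)
M = 8
h1 = [random.gauss(0, 1) for _ in range(M)]  # user 1 channel
h2 = [random.gauss(0, 1) for _ in range(M)]  # user 2 channel

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Maximum-ratio combining for user 1: w = h1. Leaks user-2 interference.
mr_leakage = dot(h1, h2)

# Zero-forcing for user 1: w = a*h1 + b*h2 with w.h1 = 1 and w.h2 = 0.
# Solve the 2x2 system via the explicit inverse of the Gram matrix.
g11, g12, g22 = dot(h1, h1), dot(h1, h2), dot(h2, h2)
det = g11 * g22 - g12 * g12
a, b = g22 / det, -g12 / det
w_zf = [a * x + b * y for x, y in zip(h1, h2)]

print(abs(dot(w_zf, h2)))  # ~0: interference from user 2 is nulled
print(dot(w_zf, h1))       # ~1: desired signal preserved
```

The price of nulling, not visible in this noiseless sketch, is that zero-forcing can amplify noise when the two channels are nearly parallel.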

A lot of improvement could be achieved over baseline canonical massive MIMO processing. One could, for example, exploit fading correlation, develop improved power control algorithms, or implement algorithms that learn the propagation environment, adapt autonomously, and predict the channels.

It might seem that research has already squeezed every drop out of the physical layer, but I do not think so. Huge gains likely remain to be harvested when resources are tight, especially when we are limited by coherence: higher carrier frequencies mean shorter coherence, and high mobility might mean almost no coherence at all. When the system is starved of coherence, even winning a couple of samples on the pilot channel means a lot. Room for new elegant theory in “closed form”? Good question. It may sound heartbreaking, but perhaps we have to give up on that. Room for useful algorithms and innovation? Certainly yes. A lot. The race will continue.
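To see why a couple of pilot samples matter so much when coherence is scarce, recall the standard pre-log factor 1 − τ_p/τ_c that multiplies spectral efficiency when τ_p out of τ_c samples per coherence block are spent on pilots. A back-of-the-envelope sketch, with made-up numbers:

```python
def prelog(tau_c, tau_p):
    """Fraction of each coherence block left for data after tau_p pilot samples."""
    return 1.0 - tau_p / tau_c

# Leisurely coherence (low mobility, low carrier): saving 2 pilot samples
# out of a 200-sample block buys about 1% more data samples.
print(prelog(200, 8) - prelog(200, 10))   # ~0.01

# Starved of coherence (high mobility, high carrier): the same 2 samples
# in a 10-sample block buy 20% more data samples.
print(prelog(10, 2) - prelog(10, 4))      # ~0.2
```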

Intelligent Reflecting Surfaces: On Use Cases and Path Loss Model

Emerging intelligent reflecting surface (IRS) technology, also known under the names “reconfigurable intelligent surface” and “software-controlled metasurface”, is sometimes marketed as an enabling technology for 6G. How does it work, what are its use cases, and how much will it improve wireless access performance at large?

The physical principle of an IRS is that the surface is composed of N atoms, each of which acts as an “intelligent” scatterer: a small antenna that receives and re-radiates without amplification, but with a controllable phase shift. Typically, an atom is implemented as a small patch antenna terminated with an adjustable impedance. Assuming the phase shifts are properly adjusted, the N scattered wavefronts can be made to add up constructively at the receiver. If coupling between the atoms is neglected, the analysis of an IRS essentially entails (i) finding the Green’s function of the source (a sum of spherical waves if close, or a plane wave if far away), (ii) computing the impinging field at each atom, (iii) integrating this field over the surface of each atom to find a current density, (iv) computing the radiated field from each atom using the physical-optics approximation, and (v) applying the superposition principle to find the field at the receiver. If the atoms are electrically small, one can approximate the re-radiated field by pretending the atoms are point sources, and then the received “signal” is basically a superposition of phase-shifted (as e^{jkr}) and amplitude-scaled (as 1/r) source signals.
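Under the point-source approximation in step (v), the superposition at the receiver can be sketched numerically. Everything below — the geometry, the wavelength, and the surface layout — is invented for illustration:

```python
import cmath, math

# Point-source sketch: each atom contributes exp(-j*k*(r1+r2))/(r1*r2),
# plus its own controllable phase shift. All numbers are made up.
k = 2 * math.pi / 0.01            # wavenumber for a 1 cm wavelength
tx, rx = (0.0, 0.0, 5.0), (3.0, 0.0, 4.0)
atoms = [(0.05 * i, 0.05 * j, 0.0) for i in range(10) for j in range(10)]

def received_field(phases):
    total = 0j
    for atom, phi in zip(atoms, phases):
        r1 = math.dist(tx, atom)    # transmitter-to-atom distance
        r2 = math.dist(atom, rx)    # atom-to-receiver distance
        total += cmath.exp(-1j * k * (r1 + r2) + 1j * phi) / (r1 * r2)
    return total

# Uncontrolled surface: all phase shifts zero, terms add up incoherently
baseline = abs(received_field([0.0] * len(atoms)))

# Properly adjusted surface: each atom cancels its own propagation phase,
# so all N terms add constructively at the receiver
opt = [k * (math.dist(tx, a) + math.dist(a, rx)) % (2 * math.pi) for a in atoms]
print(abs(received_field(opt)) > baseline)  # True: coherent combining gain
```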

A point worth reiterating is that an atom is a scatterer, not a “mirror”. A more subtle point is that the entire IRS as such, consisting of a collection of scatterers, is itself also a scatterer, not a mirror. “Mirrors” exist only in textbooks, where a plane wave impinges onto an infinitely large conducting plate (neither of which exists in practice). Irrespective of how the IRS is constructed, if it is viewed from far enough away, its radiated field will have a beamwidth that is inversely proportional to its size measured in wavelengths.
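The beamwidth statement can be put in numbers using the standard diffraction-limited approximation θ ≈ λ/D for an aperture of size D; the figures below are illustrative only:

```python
# Diffraction-limited beamwidth in radians: theta ~ wavelength / aperture,
# i.e., inversely proportional to the size measured in wavelengths.
def beamwidth_rad(aperture_m, wavelength_m):
    return wavelength_m / aperture_m

# The same 1 m surface, at 3 GHz (lambda = 0.1 m) and 30 GHz (lambda = 0.01 m):
print(beamwidth_rad(1.0, 0.1))    # 0.1 rad, roughly 6 degrees
print(beamwidth_rad(1.0, 0.01))   # 0.01 rad: 10x the size in wavelengths, 10x narrower
```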

Two different operating regimes of IRSs can be distinguished:

1. Both transmitter and receiver are in the far-field of the surface. Then the waves seen at the surface can be approximated as planar; the phase differential from the surface center to its edge is less than a few degrees, say. In this case the phase shifts applied to the atoms should be linear in the surface coordinates. The foreseen use case would be to improve coverage, or to provide an extra path that improves the rank of a point-to-point MIMO channel. Unfortunately, in this case the end-to-end transmitter-IRS-receiver path loss scales very unfavorably: the received power behaves as (N/(r_1 r_2))^2, where N is the number of meta-atoms in the surface and r_1, r_2 are the transmitter-IRS and IRS-receiver distances. The reason is, again, that the IRS itself acts as a (large) scatterer, not a “mirror”. Therefore the IRS has to be quite large before it becomes competitive with a standard single-antenna decode-and-forward relay, a simple, well-understood technology that can be implemented using already widely available components, at small power consumption and with a small form factor. (In addition, full-duplex technology is emerging and may eventually be combined with relaying, or even massive MIMO relaying.)
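The unfavorable far-field scaling can be illustrated by normalizing away all constant factors (antenna gains, wavelength, reflection efficiency) and keeping only the claimed dependence:

```python
# Far-field scaling only, all constant factors normalized to 1:
# received power ~ (N / (r1 * r2))**2.
def irs_rx_power(N, r1, r2):
    return (N / (r1 * r2)) ** 2

# Doubling both hop distances requires four times the atoms
# just to keep the received power constant.
p_near = irs_rx_power(100, 10.0, 10.0)
p_far  = irs_rx_power(400, 20.0, 20.0)   # 4x atoms for 2x distances
print(p_near == p_far)  # True
```

In other words, to hold performance fixed the surface must grow in proportion to the product r_1 r_2, which is what makes the comparison with a simple relay so unflattering at range.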

2. At least one of the transmitter and the receiver is in the near-field of the surface. Here the plane-wave approximation is no longer valid. The IRS can be sub-optimally configured to act as a “mirror”, in which case the phase shifts vary linearly as a function of the surface coordinates. Alternatively, it can be configured to act as a “lens”, with optimized phase shifts, which is typically better. As shown for example in this paper, in the near-field case the path loss scales more favorably than in the far-field case. The use cases for the near-field case are less obvious, but one can think of, perhaps, indoor environments with users close to the walls and every wall covered by an IRS. Another potential use case that I learned about recently is to use the IRS as a MIMO transmitter: a single-antenna transmitter and a nearby IRS can jointly be configured to act as a MIMO beamforming array.
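The mirror-versus-lens comparison can be sketched with the point-source model described earlier in this post. The geometry, the wavelength, and the particular linear phase profile below are all invented for illustration:

```python
import cmath, math

# Near-field mirror-vs-lens sketch; all numbers are made up.
k = 2 * math.pi / 0.05                      # 5 cm wavelength
tx, rx = (0.2, 0.0, 0.5), (1.5, 0.0, 0.8)   # both close to the surface
atoms = [(0.1 * i, 0.1 * j, 0.0) for i in range(8) for j in range(8)]

def gain(phases):
    total = 0j
    for a, phi in zip(atoms, phases):
        r1, r2 = math.dist(tx, a), math.dist(a, rx)
        total += cmath.exp(-1j * k * (r1 + r2) + 1j * phi) / (r1 * r2)
    return abs(total)

# Mirror: phase shifts linear in the surface coordinates (x, y)
mirror = [k * (0.3 * a[0] + 0.1 * a[1]) for a in atoms]

# Lens: each atom exactly compensates its own spherical propagation phase
lens = [k * (math.dist(tx, a) + math.dist(a, rx)) for a in atoms]

print(gain(lens) >= gain(mirror))  # True: matched phases add coherently
```

The lens configuration is optimal in this model because it makes every term in the sum add in phase; any linear (mirror) profile can only approximate that when the wavefronts at the surface are spherical.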

So how useful will IRS technology be in 6G? The question seems open. Indoor coverage in niche scenarios, but isn’t this an already solved problem? Outdoor coverage improvement, but then (cell-free) massive MIMO seems to be a much better option? Building MIMO transmitters from a single antenna seems exciting, but is it better than using conventional RF? Perhaps it is for the Terahertz bands, where implementation of coherent MIMO may prove truly challenging, that IRS technology will be most beneficial.

A final point is that nothing requires the atoms in an IRS to be located adjacent to one another, or even to form a surface! But they are probably easier to coordinate if they are in more or less the same place.

Scalable Cell-Free Massive MIMO

Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works: it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority is, no shock, a lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this limit concerns the power control, the signal processing, and the organization of the backhaul alike.
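A back-of-the-envelope count of AP-user associations illustrates the scalability problem; the numbers and the fixed cluster size below are invented for illustration:

```python
# In canonical cell-free massive MIMO, every access point (AP) serves every
# user, so the number of AP-user associations -- and with it the power
# control, processing, and backhaul load -- grows as M*K.
def canonical_links(M, K):
    return M * K

# A user-centric design caps each user at a fixed-size cluster of APs,
# so the association count grows only linearly in the number of users.
def user_centric_links(K, cluster_size):
    return K * cluster_size

print(canonical_links(1000, 100))       # 100000 associations
print(canonical_links(10000, 1000))     # 10000000: a 10x network, 100x the load
print(user_centric_links(1000, 8))      # 8000: a 10x network, only 10x the load
```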

At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution requires a scalable power control policy. Unsurprisingly, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) devised a power control policy with similar qualities!

Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…