All posts by Erik G. Larsson

Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

Come listen to Liesbet Van der Perre, Professor at KU Leuven (Belgium), on Monday, February 18, at 2:00 pm EST.

She will give a webinar on state-of-the-art circuit implementations of Massive MIMO and outline future research challenges. The webinar is based on, among other work, this paper.

In more detail, the webinar will summarize the fundamental technical contributions to efficient digital signal processing for Massive MIMO, clarify the opportunities and constraints of operating with low-complexity RF and analog hardware chains, explain how terminals can benefit from improved energy efficiency, discuss the status of the technology and real-life prototypes, and suggest open challenges and directions for future research.

Listen to the webinar by following this link.

Could chip-scale atomic clocks revolutionize wireless access?

This chip-scale atomic clock (CSAC), developed by Microsemi, delivers atomic-clock timing accuracy (see the specs available in the link) in a volume comparable to a matchbox, at a power consumption of 120 mW.  That is way too much for a handheld gadget, but undoubtedly negligible for any fixed installation powered from the grid.  It is an alternative to synchronization through GNSS that works anywhere, including indoors in GNSS-denied environments.

I haven’t seen a list price, and I don’t know what exotic metals or licensing costs its manufacture requires, but let’s ponder the possibility that a CSAC could be manufactured for the mass market for a few dollars each. What new applications would then become viable in wireless?

The answer is mostly (or entirely) speculation. One potential application that might become more practical is positioning using distributed arrays.  Another is distributed multipair relaying. Here and here are some specific ideas that are communication-theoretically beautiful, and probably powerful, but that seem to be perceived as unrealistic because of their synchronization requirements. Perhaps CoMP and distributed MIMO, a.k.a. “cell-free Massive MIMO”, could also benefit.

Other applications might arise, for example, in IoT, where a device only sporadically transmits information and wants to stay synchronized (perhaps there is no downlink, hence no way of reliably obtaining synchronization information).  If a timing offset (or frequency offset, for that matter) is unknown but constant over a very long time, it may be treated as a deterministic unknown and estimated. The difficulty with unknown time and frequency offsets is not their existence per se, but the fact that they change quickly over time.
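To make this concrete, here is a minimal Python sketch of the idea (all numbers are made-up assumptions, not taken from any standard or product): a device with a constant but unknown frequency offset sends a few short, sporadic pilot bursts, the offset is estimated per burst as a deterministic unknown, and the estimates are then simply averaged, since the offset does not change between bursts.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0                  # sample rate in Hz (assumption)
f_off = 37.2                 # unknown but constant frequency offset in Hz
n_bursts, burst_len = 5, 64  # a few short, sporadic pilot bursts
snr = 10.0                   # linear per-sample SNR (assumption)

estimates = []
for _ in range(n_bursts):
    n = np.arange(burst_len)
    x = np.exp(2j * np.pi * f_off * n / fs)          # pilot tone with offset
    x += (rng.normal(size=burst_len) + 1j * rng.normal(size=burst_len)) \
         / np.sqrt(2 * snr)                          # additive noise
    # Classic single-tone estimate: phase of the lag-1 autocorrelation
    r1 = np.sum(x[1:] * np.conj(x[:-1]))
    estimates.append(np.angle(r1) * fs / (2 * np.pi))

# Because the offset is a deterministic constant, the sporadic bursts can be
# combined by plain averaging, improving the estimate over time.
print("per-burst estimates (Hz):", np.round(estimates, 2))
print(f"combined estimate: {np.mean(estimates):.2f} Hz (true: {f_off} Hz)")
```

If the oscillator drifted between bursts, this averaging would not be valid; that is exactly the regime where a CSAC-grade time reference would change the picture.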

It’s often said (and true) that the “low” speed of light is the main limiting factor in wireless.  (This is because channel state information is the main limiting factor of wireless communications: if light were faster, channel coherence would last longer, so acquiring channel state information would be easier.) But maybe the unavailability of a ubiquitous, reliable time reference is another, almost as important, limiting factor. Can CSAC technology change that?  I don’t know, but perhaps we ought to take a closer look.

Massive MIMO Hardware Distortion Measured in the Lab

I wrote this paper to make a single point: the hardware distortion (especially out-band radiation) stemming from transmitter nonlinearities in massive MIMO is a deterministic function of the transmitted signals. One consequence of this is that in most cases of practical relevance, the distortion is correlated among the antennas. Specifically, under line-of-sight propagation conditions this distortion is radiated in specific directions: in the single-user case the distortion is radiated into the same direction as the signal of interest, and in the two-user case the distortion is radiated into two other directions.
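To see why nothing averages out here, consider a minimal single-user line-of-sight sketch under a third-order model (the notation is mine, not necessarily the paper’s). Antenna $m$ transmits $x_m(t) = a_m^* s(t)$, where $a_m$ is the unit-modulus steering coefficient toward the terminal and $s(t)$ is the signal of interest. A third-order nonlinearity then yields

$$y_m(t) = x_m(t) + \alpha\, x_m(t)\,|x_m(t)|^2 = a_m^* \left( s(t) + \alpha\, s(t)\,|s(t)|^2 \right),$$

where the second equality holds because $|x_m(t)| = |s(t)|$ when $|a_m| = 1$. The distortion term carries the same steering coefficient $a_m^*$ as the signal, so all antennas radiate it coherently into the terminal’s direction: the distortion is rank-one and perfectly correlated across the array, not independent noise.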

The derivation was based on a very simple third-order polynomial model. Questioning that model, or contesting the conclusions? Let’s run WebLab. WebLab is a web-server-based interface to a real power amplifier operating in the lab, developed and run by colleagues at Chalmers University of Technology in Sweden. Anyone can access the equipment in real time (though there might be a queue) by submitting a waveform and retrieving the amplified waveform using a special Matlab function, “weblab.m”, obtainable from their webpages. Since accurate characterization and modeling of amplifiers is a hard nonlinear identification problem, WebLab is a great tool for researchers who want to go beyond polynomial and truncated Volterra-type toy models.

A $\lambda/2$-spaced uniform linear array with 50 elements beamforms in free-space line-of-sight to two terminals at the (arbitrarily chosen) angles of -9 and +34 degrees, respectively. A sinusoid with frequency $f_1=\pi/10$ is sent to the first terminal, and a sinusoid with frequency $f_2=2\pi/10$ is transmitted to the other terminal. (Frequencies are in discrete time; see the WebLab documentation for details.) The radiation diagram is computed numerically, which is fairly uncontroversial for line-of-sight propagation in free space: superposition applies to wave propagation. Importantly, however, the amplification of all signals is run on actual hardware in the lab.
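For readers who want to explore the geometry without queuing for the hardware, here is a minimal Python sketch of the same computation, with a toy third-order polynomial (coefficient chosen arbitrarily) standing in for the WebLab amplifier; the Matlab code linked below runs the real thing.

```python
import numpy as np

M, N = 50, 4000                             # antennas, time samples
m, n = np.arange(M), np.arange(N)
th1, th2 = np.deg2rad(-9), np.deg2rad(34)   # terminal angles
w1, w2 = np.pi / 10, 2 * np.pi / 10         # discrete-time frequencies

def a(th):
    # Steering vector of a lambda/2-spaced ULA: phase pi*sin(theta) per element
    return np.exp(1j * np.pi * m * np.sin(th))

# Conjugate beamforming of one sinusoid per terminal (M x N per-antenna signals)
x = (np.conj(a(th1))[:, None] * np.exp(1j * w1 * n)
     + np.conj(a(th2))[:, None] * np.exp(1j * w2 * n))

# Toy third-order nonlinearity per antenna, standing in for the real amplifier
alpha = 0.1
y = x - alpha * x * np.abs(x) ** 2

# Far field in direction theta: sum_m y_m(t) * exp(j*pi*m*sin(theta))
angles = np.deg2rad(np.arange(-90, 91))
field = np.exp(1j * np.pi * np.outer(np.sin(angles), m)) @ y

# Per-angle power spectrum; inspect the two signal frequencies and the
# third-order intermodulation frequencies 2*f1-f2 and 2*f2-f1
P = np.abs(np.fft.fft(field, axis=1) / N) ** 2
for w, label in [(w1, "f1"), (w2, "f2"),
                 ((2 * w1 - w2) % (2 * np.pi), "2*f1-f2"),
                 ((2 * w2 - w1) % (2 * np.pi), "2*f2-f1")]:
    k = int(round(w / (2 * np.pi) * N)) % N
    peak = np.rad2deg(angles[np.argmax(P[:, k])])
    print(f"{label}: strongest radiation at {peak:+.0f} degrees")
```

Even this toy model reproduces the qualitative behavior: the signals of interest peak at the terminal angles, while the intermodulation products are beamformed into two other distinct directions determined by the terminal angles.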

The computed radiation diagram is shown below. (Some lines overlap.) There are two large peaks at the angles -9 and +34 degrees, corresponding to the two signals of interest with frequencies $f_1$ and $f_2$. There are also secondary peaks, at angles of approximately -44 and -64 degrees, at frequencies different from $f_1$ and $f_2$. These peaks originate from intermodulation products and represent the out-band radiation caused by the amplifier nonlinearity. (Homework: read the paper and verify that these angles are equal to those predicted by the theory.)

The Matlab code for reproduction of this experiment can be downloaded here.

Three Highlights from ICC 2018

Three massive-MIMO-related highlights from IEEE ICC in Kansas City, MO, USA, this week:

  1. J. H. Thompson from Qualcomm gave a keynote on 5G, relaying several important insights. He stressed the fundamental role of Massive MIMO, utilizing reciprocity (which in turn, of course, implies TDD). This is a message we have been preaching for years now, and it is reassuring to hear a major industry leader echo it at such an important event. He pointed to distributed Massive MIMO (which we know as “cell-free massive MIMO”) as a forthcoming technology, not only because of the macro-diversity but also because of the improved channel rank it offers to multiple-antenna terminals. This new technology may enable AR/VR/XR, wireless connectivity in factories, and much more, where conventional massive MIMO might not be sufficient.
  2. In the exhibition hall, Nokia showcased a 64×2=128 Massive MIMO array with fully digital transceiver chains and small dual-polarized patch antennas, operating at 2.5 GHz and utilizing reciprocity – though it wasn’t clear exactly what algorithmic technology went inside. (See photographs below.) Sprint has already deployed this product commercially, if I understood correctly, with an LTE TDD protocol. Ericsson had a similar product, but it was not opened up, so it was difficult to tell what the actual array looked like. The Nokia base station was only slightly larger, physically, than the flat-screen base station vision I have been talking about for many years now, along the lines of what T. Marzetta from Bell Labs envisioned already back in 2006. Now that cellular Massive MIMO is a commercial reality, what should the research community do? Granted, there is still a lot of algorithmic innovation possible (and needed), but cell-free massive MIMO with RF over fiber is probably the obvious next step.
  3. T. Marzetta from NYU gave an industry distinguished talk, speculating about the future of wireless beyond Massive MIMO. What, if anything at all, could give us another 10x or 100x gain? A key point of the talk was that we have to go back to (wave-propagation) physics and electromagnetics, a message that I very much subscribe to: the “$y=Hx+w$” models we typically use in information and communication theory are in many situations rather oversimplified. Speculations included the use of super-directivity, antenna coupling, and more… It will be interesting to see where this leads, but at any rate, it is interesting fundamental physics.

There were also lots of other interesting (non-Massive-MIMO) things: UAV connectivity, sparsity… and many questions and much discussion on how machine learning could be leveraged; more about that at a later point in time.

I was wrong: Two incorrect speculations

Our 2014 massive MIMO tutorial paper won the IEEE ComSoc best tutorial paper award this year. The idea when writing that paper was to summarize the state of the technology and to point out research directions that were relevant at that time. It is, of course, reassuring to see that many of those research directions have evolved into entire sub-fields in our community. Naturally, in envisioning these directions I also made some speculations.

It looks to me now that two of these speculations were wrong:

  • First, “Massive MIMO increases the robustness against both unintended man-made interference and intentional jamming.” This is only true with some qualifiers, or possibly not true at all. (Actually, I don’t really know, and I don’t think it is known for sure. This question remains a rather pertinent research direction for anyone interested in physical-layer security and MIMO.) Subsequent research by others showed that Massive MIMO can be extraordinarily susceptible to attacks on the pilot channels, revealing an important, fundamental vulnerability, at least if standard pilot-based channel estimation is used and no excess dimensions are “wasted” on interference suppression or detection. Basically, this pilot-channel attack exploits the so-called pilot contamination phenomenon, “hijacking” the reciprocity-based beamforming mechanism; a minimal sketch of the attack is given after this list.
  • Second, “In a way, massive MIMO relies on the law of large numbers to make sure that noise, fading, and hardware imperfections average out when signals from a large number of antennas are combined in the air.” This is not generally true, except for in-band distortion with many simultaneously multiplexed users and frequency-selective Rayleigh fading. In general, the distortion that results from hardware imperfections is correlated among the antennas. In the special case of line-of-sight with a single terminal, an important basic reference case, the distortion is identical (up to a phase shift) at all antennas, hence resulting in a rank-one transmission: the distortion is beamformed in the same direction as the signal of interest, and hardware imperfections do not “average out” at all.
    This is particularly serious for out-band effects. Readers interested in a thorough mathematical treatment may consult my student’s recent Ph.D. dissertation.
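Here is the minimal sketch of the pilot-channel attack promised in the first bullet, in Python, assuming i.i.d. Rayleigh channels, least-squares channel estimation, and conjugate beamforming (all parameters are arbitrary illustration choices, and estimation noise is omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100                                   # base station antennas

# Uplink pilot phase: the legitimate user sends a known pilot, and the
# attacker transmits the *same* pilot, contaminating the channel estimate.
h_user = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
h_jam = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

for p_jam in [0.0, 1.0, 10.0]:            # attacker pilot power (user: 1)
    h_hat = h_user + np.sqrt(p_jam) * h_jam      # contaminated LS estimate
    w = np.conj(h_hat) / np.linalg.norm(h_hat)   # reciprocity-based beamformer
    print(f"attacker pilot power {p_jam:4.1f}: "
          f"array gain toward user {np.abs(h_user @ w)**2:5.1f}, "
          f"toward attacker {np.abs(h_jam @ w)**2:5.1f}")
```

As the attacker’s pilot power grows, an increasing share of the downlink array gain is beamformed toward the attacker rather than the user; with standard pilot-based estimation alone, the base station cannot tell the difference.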

Have you found any more? Let me know. The knowledge in the field continues to evolve.