Free PDF of Massive MIMO Networks

The textbook Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency, which I have written together with Jakob Hoydis and Luca Sanguinetti, is now available for free download from https://massivemimobook.com. If you want a physical copy, you can buy the color-printed hardback edition from now publishers and major online shops, such as Amazon.

You can read more about this book in a previous blog post, and also watch this new video, in which I talk about the content and the motivation behind writing the book.

Massive MIMO at the World Cup

Massive MIMO supports an order of magnitude higher spectral efficiency than legacy LTE networks. The largest gains come from spatial multiplexing of many users per cell, so these gains can only be harvested when many users request data in every given millisecond. This requires larger traffic loads than you might think, since many seemingly continuous user applications only send data sporadically.

For this reason, I used to say that outdoor music festivals, where a crowd of 100,000 people gather to see their favorite bands, would be a first deployment scenario for Massive MIMO. This is fairly similar to what has now happened: the Russian telecom operator MTS has deployed more than 40 state-of-the-art LTE sites with Massive MIMO functionality in the seven cities where the 2018 FIFA World Cup in football is currently taking place. The base stations are deployed to cover the stadiums, fan zones, airports, train stations, and major parks and squares; in other words, the places where huge crowds of football fans are expected.

In the press release, Andrei Ushatsky, Vice President of MTS, says:

Ericsson AIR 6468 base station array with 64 antennas, which is deployed in Russia

“This launch is one of Europe’s largest Massive MIMO deployments, covering seven Russian cities, and is a major contribution by MTS in the preparation of the country’s infrastructure for the global sporting event of the year. Our Massive MIMO technology, using Ericsson equipment, significantly increases network capacity, allowing tens of thousands of fans together in one place to enjoy high-speed mobile internet without any loss in speed or quality.”

While this is one of the first major deployments of Massive MIMO, more will certainly follow in the coming years. More research into the development and implementation of advanced signal processing and resource management schemes will also be needed for many years to come – this is just the beginning.

Disadvantages with TDD

LTE was designed to work equally well in time-division duplex (TDD) and frequency-division duplex (FDD) mode, so that operators could choose their mode of operation depending on their spectrum licenses. In contrast, Massive MIMO clearly works at its best in TDD, since the pilot overhead is prohibitive in FDD (even if there are some potential solutions that partially overcome this issue).

Clearly, we will see a larger focus on TDD in future networks, but there are some traditional disadvantages with TDD that we need to bear in mind when designing these networks. I describe the three main ones below.

Link budget

Even if we allocate the same amount of time-frequency resources to the uplink and downlink in TDD and FDD operation, there is an important difference: in FDD we transmit over half the bandwidth all the time, while in TDD we transmit over the whole bandwidth half of the time. Since the power amplifier is then only active half of the time, the average radiated power is effectively cut in half if the peak power is kept the same. This means that the SNR is 3 dB lower in TDD than in FDD when transmitting at maximum peak power.
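The 3 dB figure follows from simple arithmetic: at the same peak power, the TDD transmission occupies twice the bandwidth and therefore sees twice the noise power. A minimal sketch (all quantities normalized, purely illustrative):

```python
import math

N0 = 1.0      # noise power spectral density (normalized)
B = 1.0       # total bandwidth (normalized)
P_peak = 1.0  # maximum peak transmit power (normalized)

# FDD: transmit all the time over half the bandwidth
snr_fdd = P_peak / (N0 * B / 2)
# TDD: transmit half the time over the whole bandwidth
snr_tdd = P_peak / (N0 * B)

loss_db = 10 * math.log10(snr_fdd / snr_tdd)
print(f"SNR loss of TDD relative to FDD at peak power: {loss_db:.2f} dB")
```

The ratio is exactly a factor of two, i.e., 10·log10(2) ≈ 3.01 dB.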

Massive MIMO systems are generally interference-limited and use power control to assign a reduced transmit power to most users, so the impact of the 3 dB SNR loss at maximum peak power is immaterial in many cases. However, there will always be some unfortunate low-SNR users (e.g., at the cell edge) that would like to communicate at maximum peak power in both uplink and downlink, and these users suffer from the 3 dB SNR loss. If such users are still able to connect to the base station, the beamforming gain provided by Massive MIMO will probably more than compensate for the loss in link budget, as compared with single-antenna systems. One can discuss whether it is the peak power or the average radiated power that should be constrained in practice.

Guard period

In TDD, everyone in the cell should operate in uplink and downlink mode at the same time. Since the users are at different distances from the base station and have different delay spreads, they receive the end of the downlink transmission block at different time instances. If a cell-center user starts to transmit in the uplink immediately after receiving the full downlink block, then users at the cell edge will receive a combination of the delayed downlink transmission and the cell-center user's uplink transmission. To avoid such uplink-downlink interference, TDD includes a guard period so that all users postpone their uplink transmissions until the outermost users are done with the downlink.

In fact, the base station gives every user a timing advance to make sure that, when the uplink commences, the users' uplink signals are received in a time-synchronized fashion at the base station. Therefore, the outermost users start transmitting in the uplink before the cell-center users. Thanks to this feature, the largest guard period is needed when switching from downlink to uplink, while the uplink-to-downlink switching period can be short. This is positive for Massive MIMO operation, since we want to use uplink CSI in the next downlink block, but not the other way around.

The guard period in TDD must become larger when the cell size increases, meaning that a larger fraction of the transmission resources disappears. Since no guard periods are needed in FDD, the largest benefits of TDD will be seen in urban scenarios where the macro cells have a radius of a few hundred meters and the delay spread is short.
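To get a feeling for the numbers, here is a minimal sketch of the scaling. The model (guard time covering the round-trip propagation delay of the outermost user plus the delay spread) and all numerical values are illustrative assumptions, not values from any standard:

```python
c = 3e8  # speed of light [m/s]

def guard_fraction(cell_radius_m, delay_spread_s, switch_period_s):
    """Fraction of a downlink-to-uplink switching period lost to the guard.

    Illustrative model: the guard must cover the round-trip propagation
    delay of the outermost user plus the channel delay spread.
    """
    t_guard = 2 * cell_radius_m / c + delay_spread_s
    return t_guard / switch_period_s

# Assumed 1 ms switching period and 1 microsecond delay spread
for radius in (300, 1500, 10000):  # urban, suburban, rural cell radius [m]
    frac = guard_fraction(radius, 1e-6, 1e-3)
    print(f"radius {radius:>5} m: guard uses {100 * frac:.1f}% of the period")
```

The overhead grows linearly with the cell radius: negligible for a few hundred meters, but a noticeable fraction of the resources for large rural cells.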

Inter-cell synchronization

We want to avoid interference between uplink and downlink within a cell, and the same applies to inter-cell interference. The base stations in different cells should be fairly well time-synchronized, so that the uplink and downlink take place at the same time; otherwise, a cell-edge user receiving a downlink signal from its own base station might be interfered by the uplink transmission of a neighboring user that is connected to another base station.

This can also be an issue between telecom operators that use neighboring frequency bands. There are strict regulations on the permitted out-of-band radiation, but the out-of-band interference can still be larger than the desired in-band signal if the interferer is very close to the receiving in-band user. Hence, it is preferable that the telecom operators also synchronize their switching between uplink and downlink.

Summary

Massive MIMO will bring great gains in spectral efficiency in future cellular networks, but we should not forget about the traditional disadvantages of TDD operation: 3 dB loss in SNR at peak power transmission, larger guard periods in larger cells, and time synchronization between neighboring base stations.

Are 1-bit ADCs Meaningful?

Contemporary base stations are equipped with analog-to-digital converters (ADCs) that produce samples with 12-16 bits of resolution. Since the communication bandwidth is up to 100 MHz in LTE-Advanced, a sampling rate of 500 Msample/s is quite sufficient for the ADC. The power consumption of such an ADC is on the order of 1 W. Hence, in a Massive MIMO base station with 100 antennas, the ADCs alone would consume around 100 W!

Fortunately, the 1,600 bits per sample that are effectively produced by 100 16-bit ADCs are much more than what is needed to communicate at practical SINRs. For this reason, there is plenty of research on Massive MIMO base stations equipped with lower-resolution ADCs, and the use of 1-bit ADCs has received particular attention. Some good paper references are provided in a previous blog post: Are 1-bit ADCs sufficient? While many early works considered narrowband channels, recent papers (e.g., Quantized massive MU-MIMO-OFDM uplink) have demonstrated that 1-bit ADCs can also be used in practical frequency-selective wideband channels. I'm impressed by the analytical depth of these papers, but I don't think it is practically meaningful to use 1-bit ADCs.

Do we really need 1-bit ADCs?

I think the answer is no in most situations. The reason is that ADCs with a resolution of around 6 bits strike a much better balance between communication performance and power consumption. State-of-the-art 6-bit ADCs are already very energy-efficient. For example, the paper “A 5.5mW 6b 5GS/s 4×-Interleaved 3b/cycle SAR ADC in 65nm CMOS” from ISSCC 2015 describes a 6-bit ADC that consumes 5.5 mW and has a huge sampling rate of 5 Gsample/s, which is sufficient even for extreme mmWave applications with 1 GHz of bandwidth. In a base station equipped with 100 of these 6-bit ADCs, less than 1 W is consumed by the ADCs. That will likely be a negligible factor in the total power consumption of any base station, so what is the point of using a lower resolution than that?
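As a back-of-envelope check, one can use a Walden-style figure of merit (power ≈ FOM · 2^bits · sampling rate). The sketch below infers the FOM implied by the cited 6-bit ADC and extrapolates to other resolutions; this treats the nominal bit count as the effective number of bits and assumes the FOM stays constant across resolutions, so it is only a rough illustration:

```python
def walden_power(fom_joules, bits, sample_rate_hz):
    """Back-of-envelope ADC power estimate: P ~ FOM * 2^bits * f_s."""
    return fom_joules * 2**bits * sample_rate_hz

# Figure of merit implied by the cited 6-bit, 5 GS/s, 5.5 mW ADC
fom = 5.5e-3 / (2**6 * 5e9)  # roughly 17 fJ per conversion step

# At a fixed figure of merit, each extra bit roughly doubles the power
for bits in (1, 6, 12, 16):
    p = walden_power(fom, bits, 5e9)
    print(f"{bits:>2}-bit ADC at 5 GS/s: {1e3 * p:8.3f} mW -> 100 ADCs: {100 * p:7.3f} W")
```

Even under this crude model, 100 six-bit ADCs stay comfortably below 1 W, while the exponential scaling explains why 12-16-bit converters at high sampling rates are so much more power-hungry.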

The use of 1-bit ADCs comes with a substantial loss in communication rate. In contrast, there is a consensus that Massive MIMO with 3-5 bits per ADC performs very close to the unquantized case (see Paper 1, Paper 2, Paper 3, Paper 4, Paper 5). The same applies to 6-bit ADCs, which provide an additional margin that protects against strong interference. Note that there is nothing magical about 6-bit ADCs; maybe 5-bit or 7-bit ADCs will be even better, but I don't think it is meaningful to use 1-bit ADCs.
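A quick simulation illustrates why a few bits suffice. The sketch below applies a uniform mid-rise quantizer to Gaussian samples and measures the resulting signal-to-distortion ratio; the clipping level and quantizer design are illustrative choices, not optimized:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # real-valued Gaussian baseband samples

def quantize(x, bits, clip=3.0):
    """Uniform mid-rise quantizer with 2^bits levels over [-clip, clip]."""
    step = 2 * clip / 2**bits
    xq = np.clip(x, -clip, clip - 1e-12)
    return (np.floor(xq / step) + 0.5) * step

sdr_db = {}
for bits in (1, 3, 6):
    err = x - quantize(x, bits)
    sdr_db[bits] = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    print(f"{bits}-bit quantizer: signal-to-distortion ratio {sdr_db[bits]:.1f} dB")
```

With these particular (non-optimized) choices, 6 bits keep the quantization distortion roughly 30 dB below the signal, well under typical interference levels, while the 1-bit quantizer produces distortion comparable to the signal power itself.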

Will 1-bit ADCs ever become useful?

For a 1-bit ADC to be selected instead of an ADC with higher resolution, the energy consumption of the receiving device must be extremely constrained. I don't think that will ever be the case in base stations, because the power amplifiers dominate their energy consumption. However, the situation might be different for internet-of-things devices that are supposed to run for ten years on the same battery. To make 1-bit ADCs meaningful, we need to greatly simplify all the other hardware components as well. One potential approach is a dedicated spatial-temporal waveform design, as described in this paper.

Massive MIMO Hardware Distortion Measured in the Lab

I wrote this paper to make a single point: the hardware distortion (especially out-band radiation) stemming from transmitter nonlinearities in massive MIMO is a deterministic function of the transmitted signals. One consequence is that, in most cases of practical relevance, the distortion is correlated among the antennas. Specifically, under line-of-sight propagation conditions this distortion is radiated in specific directions: in the single-user case the distortion is radiated in the same direction as the signal of interest, and in the two-user case the distortion is radiated into two other directions.

The derivation was based on a very simple third-order polynomial model. Questioning that model, or contesting the conclusions? Let's run WebLab. WebLab is a web-server-based interface to a real power amplifier operating in the lab, developed and run by colleagues at Chalmers University of Technology in Sweden. Anyone can access the equipment in real time (though there might be a queue) by submitting a waveform and retrieving the amplified waveform using a special Matlab function, “weblab.m”, obtainable from their webpages. Since accurate characterization and modeling of amplifiers is a hard nonlinear system identification problem, WebLab is a great tool for researchers who want to go beyond polynomial and truncated Volterra-type toy models.

A $\lambda/2$-spaced uniform linear array with 50 elements beamforms in free-space line-of-sight towards two terminals at the (arbitrarily chosen) angles -9 and +34 degrees, respectively. A sinusoid with frequency $f_1=\pi/10$ is sent to the first terminal, and a sinusoid with frequency $f_2=2\pi/10$ is transmitted to the other terminal. (Frequencies are in discrete time; see the WebLab documentation for details.) The radiation diagram is computed numerically: line-of-sight propagation in free space is fairly uncontroversial, as superposition applies to wave propagation. Importantly, however, the amplification of all signals is run on actual hardware in the lab.

The computed radiation diagram is shown below. (Some lines overlap.) There are two large peaks, at -9 and +34 degrees, corresponding to the two signals of interest with frequencies $f_1$ and $f_2$. There are also secondary peaks, at approximately -44 and -64 degrees, at frequencies different from $f_1$ and $f_2$. These peaks originate from intermodulation products, and represent the out-band radiation caused by the amplifier nonlinearity. (Homework: read the paper and verify that these angles are equal to those predicted by the theory.)

The Matlab code for reproduction of this experiment can be downloaded here.
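For readers without Matlab or WebLab access, a rough self-contained Python sketch of the same experiment is given below, with the real amplifier replaced by the simple third-order polynomial model mentioned above; the nonlinearity coefficient, sample count, and angle grid are arbitrary illustrative choices:

```python
import numpy as np

M = 50                                     # lambda/2-spaced ULA elements
th1, th2 = np.deg2rad(-9), np.deg2rad(34)  # terminal angles from the text
w1, w2 = np.pi / 10, 2 * np.pi / 10        # discrete-time frequencies f1, f2

m = np.arange(M)
def steer(th):
    return np.exp(1j * np.pi * m * np.sin(th))  # ULA steering vector

N = 2000
n = np.arange(N)
# Per-antenna transmit signals: conjugate beamforming of the two sinusoids
x = (np.conj(steer(th1))[:, None] * np.exp(1j * w1 * n)
     + np.conj(steer(th2))[:, None] * np.exp(1j * w2 * n))

# Third-order polynomial amplifier model (a stand-in for the real WebLab
# hardware; the coefficient 0.05 is an arbitrary illustrative choice)
y = x - 0.05 * x * np.abs(x) ** 2

# Radiated power versus angle, per frequency bin
angles = np.deg2rad(np.linspace(-90, 90, 721))
A = np.exp(1j * np.pi * np.outer(np.sin(angles), m))  # (angle, antenna)
Y = np.fft.fft(y, axis=1) / N                         # (antenna, frequency)
pattern = np.abs(A @ Y) ** 2                          # (angle, frequency)

bins = {"f1": round(w1 * N / (2 * np.pi)),            # signal of interest 1
        "f2": round(w2 * N / (2 * np.pi)),            # signal of interest 2
        "2f2-f1": round((2 * w2 - w1) * N / (2 * np.pi))}  # intermodulation
peaks = {}
for label, k in bins.items():
    peaks[label] = np.rad2deg(angles[np.argmax(pattern[:, k])])
    print(f"peak at {label}: {peaks[label]:.1f} degrees")
```

With these choices, the main beams land at -9 and +34 degrees, while the intermodulation product at $2f_2-f_1$ is beamformed into a clearly different direction, qualitatively reproducing the secondary peaks described above (the exact angles depend on the amplifier model and on spatial aliasing of the intermodulation term).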

Three Highlights from ICC 2018

Three massive-MIMO-related highlights from IEEE ICC in Kansas City, MO, USA, this week:

  1. J. H. Thompson from Qualcomm gave a keynote on 5G, conveying several important insights. He stressed the fundamental role of Massive MIMO, utilizing reciprocity (which, in turn, of course implies TDD). This is a message we have been preaching for years now, and it is reassuring to hear a main industry leader echo it at such an important event. He pointed to distributed Massive MIMO (which we know as “cell-free massive MIMO”) as a forthcoming technology, not only because of the macro-diversity but also because of the improved channel rank it offers to multiple-antenna terminals. This new technology may enable AR/VR/XR, wireless connectivity in factories, and much more, where conventional Massive MIMO might not be sufficient.
  2. In the exhibition hall, Nokia showcased a 64×2=128 Massive MIMO array with fully digital transceiver chains and small dual-polarized patch antennas, operating at 2.5 GHz and utilizing reciprocity – though it wasn't clear exactly what algorithmic technology went inside. (See photographs below.) If I understood correctly, Sprint has already deployed this product commercially, with an LTE TDD protocol. Ericsson had a similar product, but it was not opened up, so it was difficult to tell what the actual array looked like. The Nokia base station was only slightly larger, physically, than the flat-screen base station vision I have been talking about for many years now, along the lines of what T. Marzetta from Bell Labs proposed already back in 2006. Now that cellular Massive MIMO is a commercial reality, what should the research community do? Granted, there is still lots of algorithmic innovation possible (and needed), but cell-free massive MIMO with RF over fiber is probably the obvious next step.
  3. T. Marzetta from NYU gave an industry distinguished talk, speculating about the future of wireless beyond Massive MIMO. What, if anything, could give us another 10x or 100x gain? A key point of the talk was that we have to go back to (wave propagation) physics and electromagnetics, a message I very much subscribe to: the “y=Hx+w” models we typically use in information and communication theory are in many situations rather oversimplified. Speculations included the use of super-directivity, antenna coupling, and more. It will be interesting to see where this leads; at any rate, it is interesting fundamental physics.

There were also lots of other interesting (non-Massive MIMO) things: UAV connectivity, sparsity, and a great deal of questions and discussion on how machine learning could be leveraged – more about that at a later point in time.
