Field Tests of FDD Massive MIMO

Frequency-division duplex (FDD) operation of Massive MIMO in LTE is the topic of two press releases from January 2017. The first press release describes a joint field test carried out by ZTE and China Telecom. It claims three-fold improvements in per-cell spectral efficiency using standard LTE devices, but no further details are given. The second press release describes a field verification carried out by Huawei and China Unicom. The average data rate was 87 Mbit/s per user over a 20 MHz channel and was achieved using commercial LTE devices. This corresponds to a spectral efficiency of 4.35 bit/s/Hz per user. A sum rate of 697 Mbit/s is also mentioned, from which one could guess that eight users were multiplexed (87 × 8 = 696).

Image source: Huawei

There are no specific details of the experimental setup or implementation in either of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology, and hopefully it will result in something revolutionary in the years to come.

Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.
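
To get a feeling for why this is challenging, here is a back-of-the-envelope sketch of the pilot overhead (the coherence block of 200 symbols, 100 antennas, and 10 users are assumed example values, not numbers from any trial): in TDD, reciprocity lets the downlink precoding reuse the uplink pilots, whose number scales with the number of users, whereas FDD downlink training must excite every antenna port.

    # Back-of-the-envelope pilot overhead: TDD vs FDD (illustrative numbers only)
    tau_c = 200   # assumed coherence block length in symbols (example value)
    M = 100       # base station antennas
    K = 10        # single-antenna users

    # TDD: reciprocity-based precoding reuses K uplink pilots, one per user.
    # FDD: downlink training must excite all M antenna ports (feedback comes on top).
    tdd_overhead = K / tau_c
    fdd_overhead = M / tau_c

    print(f"TDD pilot overhead: {tdd_overhead:.0%} of each coherence block")
    print(f"FDD pilot overhead: {fdd_overhead:.0%} of each coherence block")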

We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?

Update: Blue Danube Systems has issued a press release describing trials of FDD Massive MIMO as well. Many companies apparently want to be “first” with this technology for LTE.

More Bandwidth Requires More Power or Antennas

The main selling point of millimeter-wave communications is the abundant bandwidth available in such frequency bands; for example, 2 GHz of bandwidth instead of 20 MHz as in conventional cellular networks. The underlying argument is that the use of much wider bandwidths immediately leads to much higher capacities, in terms of bit/s, but the reality is not that simple.

To look into this, consider a communication system operating over a bandwidth of $B$ Hz. Assuming an additive white Gaussian noise channel, the capacity is

     $$ C = B \log_2 \left(1+\frac{P \beta}{N_0 B} \right)$$

where $P$ W is the transmit power, $\beta$ is the channel gain, and $N_0$ W/Hz is the power spectral density of the noise. The term $(P \beta)/(N_0 B)$ inside the logarithm is referred to as the signal-to-noise ratio (SNR).

Since the bandwidth $B$ appears in front of the logarithm, it might seem that the capacity grows linearly with the bandwidth. This is not the case, since the noise term $N_0 B$ in the SNR also grows linearly with the bandwidth. This fact is illustrated by Figure 1 below, where we consider a system that achieves an SNR of 0 dB at a reference bandwidth of 20 MHz. As we increase the bandwidth towards 2 GHz, the capacity grows only modestly. Despite having 100 times more bandwidth, the capacity only improves by $1.44\times$, which is far from the $100\times$ that a linear increase would give.
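
These numbers are easy to reproduce. A minimal sketch of the computation behind Figure 1 (fixed transmit power, with the SNR normalized to one at the 20 MHz reference):

    import numpy as np

    B_ref = 20e6    # reference bandwidth: 20 MHz
    snr_ref = 1.0   # SNR of 0 dB at the reference bandwidth

    def capacity(B):
        # With fixed transmit power, the SNR scales as B_ref / B
        return B * np.log2(1 + snr_ref * B_ref / B)

    print(f"Capacity at 20 MHz: {capacity(20e6) / 1e6:.1f} Mbit/s")
    print(f"Capacity at 2 GHz:  {capacity(2e9) / 1e6:.1f} Mbit/s")
    print(f"Improvement factor: {capacity(2e9) / capacity(20e6):.2f}")  # ~1.44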

Figure 1: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz. The transmit power is fixed.

The reason for this modest capacity growth is that the SNR is inversely proportional to the bandwidth. One can show that

     $$ C \to \frac{P \beta}{N_0}\log_2(e) \quad \textrm{as} \,\, B \to \infty.$$

The convergence to this limit is seen in Figure 1 and is relatively fast since $\log_2(1+x) \approx x \log_2(e)$ for $0 \leq x \leq 1$.

To achieve a linear capacity growth, we need to keep the SNR $(P \beta)/(N_0 B)$ fixed as the bandwidth increases. This can be achieved by increasing the transmit power $P$ proportionally to the bandwidth, which entails using $100\times$ more power when operating over a $100\times$ wider bandwidth. This might not be desirable in practice, at least not for battery-powered devices.

An alternative is to use beamforming to improve the channel gain. In a Massive MIMO system, the effective channel gain is $\beta = \beta_1 M$, where $M$ is the number of antennas and $\beta_1$ is the gain of a single-antenna channel. Hence, we can increase the number of antennas proportionally to the bandwidth to keep the SNR fixed.

Figure 2: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz with one antenna. The transmit power (or the number of antennas) is either fixed or grows proportionally to the bandwidth.

Figure 2 considers the same setup as in Figure 1, but now we also let either the transmit power or the number of antennas grow proportionally to the bandwidth. In both cases, we achieve a capacity that grows proportionally to the bandwidth, as we initially hoped for.
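
This computation is also easy to redo. A minimal sketch of Figure 2, where keeping the SNR fixed models a transmit power $P$ (or a number of antennas $M$, via $\beta = \beta_1 M$) that grows proportionally to the bandwidth:

    import numpy as np

    B_ref, snr_ref = 20e6, 1.0   # 20 MHz reference with an SNR of 0 dB
    B = 2e9                      # 100 times wider bandwidth

    # Fixed power: the SNR shrinks as the bandwidth grows
    C_fixed = B * np.log2(1 + snr_ref * B_ref / B)

    # Power P (or antenna number M, since beta = beta_1 * M) grows with the
    # bandwidth, which keeps the SNR fixed at its reference value
    C_scaled = B * np.log2(1 + snr_ref)

    print(f"Fixed power:        {C_fixed / 1e6:6.1f} Mbit/s")   # ~28.7
    print(f"Scaled power or M:  {C_scaled / 1e6:6.1f} Mbit/s")  # 2000 = 100 x 20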

In conclusion, to make efficient use of more bandwidth we require more transmit power or more antennas at the transmitter and/or receiver. It is worth noting that these requirements are purely due to the increase in bandwidth. In addition, for any given bandwidth, the operation at millimeter-wave frequencies requires much more transmit power and/or more antennas (e.g., additional constant-gain antennas or one constant-aperture antenna) just to achieve the same SNR as in a system operating at conventional frequencies below 5 GHz.

Massive MIMO Trials in LTE Networks

Massive MIMO is often mentioned as a key 5G technology, but could it also be exploited in currently standardized LTE networks? The ZTE-Telefónica trials that were initiated in October 2016 show that this is indeed possible. The press release from late last year describes the first results. For example, the trial showed improvements in network capacity and cell-edge data rates of up to six times, compared to traditional LTE.

The Massive MIMO blog has talked with Javier Lorca Hernando at Telefónica to get further details. The trials were carried out at the Telefónica headquarters in Madrid. A base station with 128 antenna ports was deployed on the rooftop of one of their buildings and the users were located on one floor of the central building, approximately 100 m from the base station. The users basically had cell-edge conditions, due to the metallized glass and multiple metallic constructions surrounding them.

The uplink and downlink data transmissions were carried out in the 2.6 GHz band. Typical Massive MIMO time-division duplex (TDD) operation was considered, where the uplink detection and downlink precoding are based on uplink pilots and channel reciprocity. The existing LTE sounding reference signals (SRSs) were used as uplink pilots. The reciprocity-based precoding was implemented by using LTE’s transmission mode 8 (TM8), which supports any type of precoding. Downlink pilots were used for link adaptation and demodulation purposes.
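
The precoding algorithm used in the trial is not public, but the essence of reciprocity-based operation is easy to sketch. Below is a toy example (my own illustration, assuming maximum ratio transmission and i.i.d. Rayleigh fading, not the actual trial implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    M, K = 128, 4   # antenna ports and simultaneous users (trial-like numbers)

    # True channels, i.i.d. Rayleigh fading as a toy model (one column per user)
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

    # Uplink pilots (e.g., the LTE SRSs): the base station sees the channels in noise
    noise = 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
    H_est = H + noise

    # Reciprocity: reuse the uplink estimate as downlink CSI and apply
    # maximum ratio transmission, one normalized precoding vector per user
    W = H_est.conj() / np.linalg.norm(H_est, axis=0)

    # Effective downlink channel: the diagonal holds the ~sqrt(M) beamforming
    # gains, the off-diagonal entries are the residual interuser interference
    G = H.T @ W
    print(np.round(np.abs(G), 2))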

It is great to see that Massive MIMO can also be implemented in LTE systems. In this trial, the users were static and relatively few, but it will be exciting to see if the existing LTE reference signals will also enable Massive MIMO communications for a multitude of mobile users!

Update: ZTE has carried out similar experiments in cooperation with Smartfren in Indonesia. Additional field trials are mentioned in the comments to this post.

Upside-Down World

The main track for 5G seems to be FDD for “old bands” below 3 GHz and TDD for “new bands” above 3 GHz (particularly mmWave frequencies). But physics advises us to do the opposite:

  • At lower frequencies, larger areas are covered, thus most connections are likely to experience non-line-of-sight propagation. Since the channel coherence is large (it scales inversely proportionally to the Doppler spread), there is room for many terminals to transmit uplink pilots from which the base station can obtain CSI. Reciprocity-based beamforming in TDD operation is scalable with respect to the number of base station antennas and delivers great value; a rough numerical sketch follows after this list.
  • As the carrier frequency is increased, the coverage area shrinks and connections are more and more likely to experience line-of-sight propagation. At mmWave frequencies, all connections are either line-of-sight or consist of a small number of reflected components. The channel can then be parameterized with only a few angular parameters, so FDD operation with appropriate flavors of beam tracking may work satisfactorily. Reciprocity would certainly be desirable in this case too, but may not be necessary for the system to function.
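
Here is the promised sketch of the scaling in the first bullet, using textbook rules of thumb (the user speed, delay spread, and the factors of two are assumptions for illustration):

    # Rough coherence-block sizing at two carrier frequencies
    c = 3e8        # speed of light [m/s]
    v = 30 / 3.6   # user speed: 30 km/h, an assumed example value
    delay_spread = 1e-6              # assumed delay spread [s]
    B_c = 1 / (2 * delay_spread)     # rule-of-thumb coherence bandwidth [Hz]

    for f_c in [2.6e9, 28e9]:        # sub-3 GHz band vs mmWave band
        doppler = v * f_c / c        # maximum Doppler shift [Hz]
        T_c = 1 / (2 * doppler)      # coherence time, inversely proportional to Doppler
        tau_c = T_c * B_c            # symbols per coherence block
        print(f"f_c = {f_c / 1e9:4.1f} GHz: coherence block of ~{tau_c:.0f} symbols, "
              f"i.e., room for that many orthogonal uplink pilots")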

Physics has given us the reciprocity principle. It should be exploited in wireless system design.

Channel Hardening Makes Fading Channels Behave as Deterministic

One of the main impairments in wireless communications is small-scale channel fading. This refers to random fluctuations in the channel gain, which are caused by microscopic changes in the propagation environments. The fluctuations make the channel unreliable, since occasionally the channel gain is very small and the transmitted data is then received in error.

The diversity achieved by sending a signal over multiple channels with independent realizations is key to combating small-scale fading. Spatial diversity is particularly attractive, since it can be obtained by simply having multiple antennas at the transmitter or the receiver. Suppose the probability of a bad channel gain realization is $p$. If we have $M$ antennas with independent channel gains, then the risk that all of them are bad is $p^M$. For example, with $p=0.1$, there is a 10% risk of getting a bad channel in a single-antenna system and a 0.000001% risk in an 8-antenna system. This shows that just a few antennas can be sufficient to greatly improve reliability.
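
This argument is easy to check by simulation; here is a minimal sketch in which “bad” means that an exponentially distributed (Rayleigh fading) channel gain falls within its weakest 10%:

    import numpy as np

    rng = np.random.default_rng(1)
    p = 0.1                      # probability that a single channel is "bad"
    threshold = -np.log(1 - p)   # an Exp(1) gain falls below this with probability p
    trials = 1_000_000

    # M = 8 is omitted: observing p^8 = 1e-8 would need on the order of 1e9 trials
    for M in [1, 2, 4]:
        gains = rng.exponential(size=(trials, M))
        all_bad = np.all(gains < threshold, axis=1).mean()
        print(f"M = {M}: simulated {all_bad:.1e}, theory p^M = {p**M:.1e}")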

In Massive MIMO systems, with a “massive” number of antennas at the base station, the spatial diversity also leads to something called “channel hardening”. This terminology was used already in a paper from 2004:

B. M. Hochwald, T. L. Marzetta, and V. Tarokh, “Multiple-antenna channel hardening and its implications for rate feedback and scheduling,” IEEE Transactions on Information Theory, vol. 50, no. 9, pp. 1893–1909, 2004.

In short, channel hardening means that a fading channel behaves as if it were a non-fading channel. The randomness is still there, but its impact on the communication is negligible. In the 2004 paper, the hardening is measured by dividing the instantaneous supported data rate by the fading-averaged data rate. If the relative fluctuations are small, then the channel has hardened.

Since Massive MIMO systems contain random interference, it is usually the hardening of the channel that the desired signal propagates over that is studied. If the channel is described by a random $M$-dimensional vector $\mathbf{h}$, then the ratio $\|\mathbf{h}\|^2/\mathrm{E}\{\|\mathbf{h}\|^2\}$ between the instantaneous channel gain and its average is considered. If the fluctuations of the ratio are small, then there is channel hardening. With an independent Rayleigh fading channel, the variance of the ratio reduces with the number of antennas as $1/M$. The intuition is that the channel fluctuations average out over the antennas. A detailed analysis is available in a recent paper.

The variance of $\|\mathbf{h}\|^2/\mathrm{E}\{\|\mathbf{h}\|^2\}$ decays as $1/M$ for independent Rayleigh fading channels.

The figure above shows how the variance of $\|\mathbf{h}\|^2/\mathrm{E}\{\|\mathbf{h}\|^2\}$ decays with the number of antennas. The convergence towards zero is gradual and so is the channel hardening effect. I personally think that you need at least $M=50$ to truly benefit from channel hardening.
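
The $1/M$ decay is easy to verify: for i.i.d. Rayleigh fading, $\|\mathbf{h}\|^2$ is a sum of $M$ unit-mean exponential variables, with mean $M$ and variance $M$, so the normalized gain has variance exactly $1/M$. A minimal simulation sketch:

    import numpy as np

    rng = np.random.default_rng(2)
    trials = 20_000

    for M in [1, 10, 50, 100]:
        # i.i.d. Rayleigh fading: h ~ CN(0, I_M), so E{||h||^2} = M
        h = (rng.standard_normal((trials, M))
             + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
        ratio = np.sum(np.abs(h)**2, axis=1) / M   # ||h||^2 / E{||h||^2}
        print(f"M = {M:3d}: variance {ratio.var():.4f} (theory 1/M = {1 / M:.4f})")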

Channel hardening has several practical implications. One is the improved reliability of having a nearly deterministic channel, which results in lower latency. Another is the lack of scheduling diversity; that is, one cannot schedule users when their $\|\mathbf{h}\|^2$ is unusually large, since the fluctuations are small. There is also little to gain from estimating the current realization of $\|\mathbf{h}\|^2$, since it is relatively close to its average value. This can alleviate the need for downlink pilots in Massive MIMO.

Pilot Contamination: an Ultimate Limitation?

Many misconceptions float around about the pilot contamination phenomenon. While present in any multi-cellular system, its effect tends to be particularly pronounced in Massive MIMO due to the presence of coherent interference, which scales proportionally to the coherent beamforming gain. (Chapter 4 in Fundamentals of Massive MIMO gives the details.)
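
The coherent scaling is easy to reproduce in a two-cell toy simulation (my own minimal sketch, not from the book: two terminals share a pilot, the base station forms a noise-free least-squares-type estimate, and maximum ratio combining is applied):

    import numpy as np

    rng = np.random.default_rng(3)
    trials = 2_000

    def cn(shape):   # i.i.d. CN(0,1) samples
        return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

    for M in [10, 100, 1000]:
        h = cn((trials, M))   # desired terminal's channel
        g = cn((trials, M))   # terminal in another cell sharing the same pilot
        h_est = h + g         # pilot-contaminated channel estimate

        # Maximum ratio combining with the contaminated vs. a perfect estimate:
        # both the signal and the contaminated interference grow as M^2 (coherent),
        # whereas the interference under perfect CSI only grows as M (non-coherent)
        s_cont = np.abs(np.sum(h_est.conj() * h, axis=1))**2
        i_cont = np.abs(np.sum(h_est.conj() * g, axis=1))**2
        s_perf = np.abs(np.sum(h.conj() * h, axis=1))**2
        i_perf = np.abs(np.sum(h.conj() * g, axis=1))**2
        print(f"M = {M:4d}: SIR contaminated = {s_cont.mean() / i_cont.mean():5.2f}, "
              f"SIR with perfect CSI = {s_perf.mean() / i_perf.mean():7.1f}")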

A good system design definitely must not ignore pilot interference. While it is easily removed “on the average” through greater-than-one reuse, the randomness present in wireless communications – especially the shadow fading – will occasionally cause a few terminals to be severely hit by pilot contamination and bring down their performance. This is problematic whenever we are concerned about the provision of uniformly great service in the cell – and that is one of the principal selling arguments for Massive MIMO. Notwithstanding, the impact of pilot contamination can be reduced significantly in practice by appropriate pilot reuse and judicious power control. (Chapters 5-6 in Fundamentals of Massive MIMO give many details.)

A more fundamental question is whether pilot contamination could be entirely overcome: Does there exist an upper bound on capacity that saturates as the number of antennas, M, is increased indefinitely? Some have speculated that it cannot be overcome, much in line with known capacity upper bounds for cellular base station cooperation. While this question may be of more academic than practical interest, it has long been open except in some trivial special cases: If the channels of two terminals lie in non-overlapping subspaces and Bayesian channel estimation is used, the channel estimates will not be contaminated, and capacity grows as log(M) when M increases without bound.

A much deeper result is established in this recent paper: the subspaces of the channel covariances may overlap, yet capacity grows as log(M). Technically, Rayleigh fading with spatial correlation is assumed, and the correlation matrices of the contaminating terminals need only be linearly independent as M goes to infinity (the exact conditions are given in the paper). In retrospect, this is not unreasonable given the substantial a priori knowledge exploited by the Bayesian channel estimator, but I found it amazing how weak the required conditions on the correlation matrices are. It remains unclear whether the result generalizes to the case of a growing number of interferers: letting the number of antennas go to infinity and then growing the network is not the same thing as taking an “infinite” (scalable) network and increasing the number of antennas. But this paper elegantly and rigorously answers a long-standing question that has been the subject of much debate in the community – and it is a recommended read for anyone interested in the fundamental limits of Massive MIMO.
