Category Archives: Commentary

Improving the Cell-Edge Performance

The cellular network that my smartphone connects to normally delivers 10-40 Mbit/s. That is sufficient for video-streaming and other applications that I might use. Unfortunately, I sometimes have poor coverage and then I can barely download emails or make a phone call. That is why I think that providing ubiquitous data coverage is the most important goal for 5G cellular networks. It might also be the most challenging 5G goal, because the area coverage has been an open problem since the first generation of cellular technology.

It is the physics that makes it difficult to provide good coverage. The transmitted signals spread out and only a tiny fraction of the transmitted power reaches the receive antenna (e.g., one part in a billion). In cellular networks, the received signal power decays roughly as the fourth power of the propagation distance. This results in the following data rate coverage behavior:

Figure 1: Variations in the downlink data rates in an area covered by nine base stations.

This figure considers an area covered by nine base stations, which are located at the centers of the nine peaks. Users that are close to one of the base stations receive the maximum downlink data rate, which in this case is 60 Mbit/s (i.e., a spectral efficiency of 6 bit/s/Hz over a 10 MHz channel). As a user moves away from a base station, the data rate drops rapidly. At the cell edge, where the user is equally distant from multiple base stations, the rate is nearly zero in this simulation. This is because the received signal power is low compared to the receiver noise.
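To get a feeling for how quickly the rate drops, here is a minimal Python sketch of a single noise-limited link with a path-loss exponent of four, calibrated so that the peak rate is 60 Mbit/s. The reference distance and reference SNR are illustrative assumptions, not the parameters behind Figure 1.

```python
import numpy as np

# Minimal sketch (not the simulation behind Figure 1): downlink rate vs. distance
# for a single noise-limited link with a path-loss exponent of 4. The 100 m
# reference distance and the SNR giving 6 bit/s/Hz there are assumptions.
B = 10e6                 # bandwidth [Hz]
snr_ref = 2**6 - 1       # SNR that gives 6 bit/s/Hz (60 Mbit/s) at the reference distance
d_ref = 100.0            # reference distance [m] (assumed)

for d in [100, 200, 400, 800, 1600]:
    snr = snr_ref * (d_ref / d) ** 4        # received power decays as distance^-4
    rate = B * np.log2(1 + snr) / 1e6       # Shannon rate [Mbit/s]
    print(f"distance {d:4d} m: {rate:5.1f} Mbit/s")
```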

What can be done to improve the coverage?

One possibility is to increase the transmit power. This is mathematically equivalent to densifying the network, so that the area covered by each base station is smaller. The figure below shows what happens if we use 100 times more transmit power:

Figure 2: The transmit powers have been increased 100 times as compared to Figure 1.

There are some visible differences as compared to Figure 1. First, the region around the base station that gives 60 Mbit/s is larger. Second, the data rates at the cell edge are slightly improved, but there are still large variations within the area. However, it is no longer the noise that limits the cell-edge rates—it is the interference from other base stations.

The inter-cell interference remains even if we further increase the transmit power. The reason is that the desired signal power and the interfering signal power grow at the same rate at the cell edge. Similar things happen if we densify the network by adding more base stations, as nicely explained in a recent paper by Andrews et al.
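The saturation can be seen in a few lines of arithmetic: both the desired and the interfering signal powers scale with the transmit power P, so the SINR converges to the ratio of the channel gains. The gains and noise power in this sketch are illustrative assumptions.

```python
# Minimal sketch of the cell-edge saturation: both the desired and the interfering
# signal powers scale with the transmit power P, so the SINR converges to the
# ratio of the channel gains. All values below are illustrative assumptions.
beta_des = 1e-10      # channel gain to the serving base station (assumed)
beta_int = 0.5e-10    # channel gain from an interfering base station (assumed)
noise = 1e-9          # noise power over the full bandwidth [W] (assumed)

for P in [1, 10, 100, 1000, 10000]:   # transmit power [W]
    sinr = (P * beta_des) / (P * beta_int + noise)
    print(f"P = {P:5d} W: SINR = {sinr:4.2f} (interference-limited ceiling: {beta_des / beta_int:.1f})")
```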

Ideally, we would like to increase only the power of the desired signals, while keeping the interference power fixed. This is what transmit precoding from a multi-antenna array can achieve; the transmitted signals from the multiple antennas at the base station add constructively only at the spatial location of the desired user. More precisely, the signal power is proportional to M (the number of antennas), while the interference power caused to other users is independent of M. The following figure shows the data rates when we go from 1 to 100 antennas:

Figure 3: The number of base station antennas has been increased from 1 (as in Figure 1) to 100.

Figure 3 shows that the data rates are increased for all users, but particularly for those at the cell edge. In this simulation, everyone is now guaranteed a minimum data rate of 30 Mbit/s, while 60 Mbit/s is delivered in a large fraction of the coverage area.
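A minimal sketch of this scaling is given below: the desired signal power grows proportionally to M, while the interference and noise stay fixed. The normalized powers are illustrative assumptions, not the values behind Figure 3.

```python
import numpy as np

# Minimal sketch of coherent precoding at the cell edge: the desired signal power
# grows proportionally to the number of antennas M, while the inter-cell
# interference and the noise stay fixed. The normalized powers are illustrative
# assumptions, not the values behind Figure 3.
B = 10e6              # bandwidth [Hz]
signal_1ant = 0.2     # desired signal power with a single antenna (normalized)
interference = 0.5    # inter-cell interference power, independent of M (normalized)
noise = 1.0           # noise power (normalized)

for M in [1, 10, 50, 100]:
    sinr = M * signal_1ant / (interference + noise)
    rate = B * np.log2(1 + sinr) / 1e6
    print(f"M = {M:3d}: SINR = {sinr:5.2f}, cell-edge rate = {rate:4.1f} Mbit/s")
```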

In practice, the propagation losses are not only distance-dependent, but also affected by other large-scale effects, such as shadowing. Nevertheless, the properties described above remain. Coherent precoding from a base station with many antennas can greatly improve the data rates for the cell-edge users, since only the desired signal power (and not the interference power) is increased. Higher transmit power or smaller cells will only lead to an interference-limited regime where the cell-edge performance remains poor. A practical challenge with coherent precoding is that the base station needs to learn the user channels, but reciprocity-based Massive MIMO provides a scalable solution to that. That is why Massive MIMO is the key technology for delivering ubiquitous connectivity in 5G.

Field Tests of FDD Massive MIMO

Frequency-division duplex (FDD) operation of Massive MIMO in LTE is the topic of two press releases from January 2017. The first press release describes a joint field test carried out by ZTE and China Telecom. It claims three-fold improvements in per-cell spectral efficiency using standard LTE devices, but no further details are given. The second press release describes a field verification carried out by Huawei and China Unicom. The average data rate was 87 Mbit/s per user over a 20 MHz channel and was achieved using commercial LTE devices. This corresponds to a spectral efficiency of 4.36 bit/s/Hz per user. A sum rate of 697 Mbit/s is also mentioned, from which one could guess that eight users were multiplexed (87•8=696).

Image source: Huawei

There are no specific details of the experimental setup or implementation in any of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology and hopefully it will result in something revolutionary in the years to come.

Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.

We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?

Update: Blue Danube Systems has also released a press release describing trials of FDD Massive MIMO. Many companies apparently want to be “first” with this technology for LTE.

More Bandwidth Requires More Power or Antennas

The main selling point of millimeter-wave communications is the abundant bandwidth available in such frequency bands; for example, 2 GHz of bandwidth instead of 20 MHz as in conventional cellular networks. The underlying argument is that the use of much wider bandwidths immediately leads to much higher capacities, in terms of bit/s, but the reality is not that simple.

To look into this, consider a communication system operating over a bandwidth of $B$ Hz. Assuming an additive white Gaussian noise channel, the capacity is

     $$ C = B \log_2 \left(1+\frac{P \beta}{N_0 B} \right)$$

where $P$ W is the transmit power, $\beta$ is the channel gain, and $N_0$ W/Hz is the power spectral density of the noise. The term $(P \beta)/(N_0 B)$ inside the logarithm is referred to as the signal-to-noise ratio (SNR).

Since the bandwidth $B$ appears in front of the logarithm, it might seem that the capacity grows linearly with the bandwidth. This is not the case, since the noise term $N_0 B$ in the SNR also grows linearly with the bandwidth. This fact is illustrated by Figure 1 below, where we consider a system that achieves an SNR of 0 dB at a reference bandwidth of 20 MHz. As we increase the bandwidth towards 2 GHz, the capacity grows only modestly. Despite having 100 times more bandwidth, the capacity only improves by $1.44\times$, which is far from the $100\times$ that a linear increase would give.

Figure 1: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz. The transmit power is fixed.
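The trend in Figure 1 is easy to reproduce; the following minimal sketch uses the same assumptions (0 dB SNR at the 20 MHz reference bandwidth, fixed transmit power).

```python
import numpy as np

# Capacity vs. bandwidth with fixed transmit power: the SNR is 0 dB at 20 MHz and
# decays inversely proportionally to the bandwidth (same assumptions as Figure 1).
B_ref, snr_ref = 20e6, 1.0

for B in [20e6, 100e6, 500e6, 2e9]:
    snr = snr_ref * B_ref / B             # noise power N0*B grows with the bandwidth
    C = B * np.log2(1 + snr) / 1e6        # capacity [Mbit/s]
    print(f"B = {B / 1e6:6.0f} MHz: C = {C:4.1f} Mbit/s")

# Limit as B -> infinity: (P*beta/N0) * log2(e)
print(f"Limit: {snr_ref * B_ref * np.log2(np.e) / 1e6:.1f} Mbit/s")
```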

The reason for this modest capacity growth is that the SNR is inversely proportional to the bandwidth. One can show that

     $$ C \to \frac{P \beta}{N_0}\log_2(e) \quad \textrm{as} \,\, B \to \infty.$$

The convergence to this limit is seen in Figure 1 and is relatively fast since $\log_2(1+x) \approx x \log_2(e)$ for $0 \leq x \leq 1$.

To achieve a linear capacity growth, we need to keep the SNR $(P \beta)/(N_0 B)$ fixed as the bandwidth increases. This can be achieved by increasing the transmit power $P$ proportionally to the bandwidth, which entails using $100\times$ more power when operating over a $100\times$ wider bandwidth. This might not be desirable in practice, at least not for battery-powered devices.

An alternative is to use beamforming to improve the channel gain. In a Massive MIMO system, the effective channel gain is $\beta = \beta_1 M$, where $M$ is the number of antennas and $\beta_1$ is the gain of a single-antenna channel. Hence, we can increase the number of antennas proportionally to the bandwidth to keep the SNR fixed.

Figure 2: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz with one antenna. The transmit power (or the number of antennas) is either fixed or grows proportionally to the bandwidth.

Figure 2 considers the same setup as in Figure 1, but now we also let either the transmit power or the number of antennas grow proportionally to the bandwidth. In both cases, we achieve a capacity that grows proportionally to the bandwidth, as we initially hoped for.
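A minimal sketch of this comparison, under the same assumption of 0 dB SNR at the 20 MHz reference bandwidth, is given below.

```python
import numpy as np

# Capacity vs. bandwidth when the transmit power P (or the number of antennas M,
# since the effective gain is beta_1 * M) grows proportionally to the bandwidth,
# compared to keeping it fixed. Same 0 dB reference SNR at 20 MHz as before.
B_ref = 20e6

for B in [20e6, 200e6, 2e9]:
    C_fixed = B * np.log2(1 + B_ref / B) / 1e6    # fixed power: SNR decays with B
    C_scaled = B * np.log2(1 + 1.0) / 1e6         # scaled power/antennas: SNR stays at 0 dB
    print(f"B = {B / 1e6:6.0f} MHz: fixed {C_fixed:5.1f} Mbit/s, scaled {C_scaled:6.1f} Mbit/s")
```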

In conclusion, to make efficient use of more bandwidth we require more transmit power or more antennas at the transmitter and/or receiver. It is worth noting that these requirements are purely due to the increase in bandwidth. In addition, for any given bandwidth, the operation at millimeter-wave frequencies requires much more transmit power and/or more antennas (e.g., additional constant-gain antennas or one constant-aperture antenna) just to achieve the same SNR as in a system operating at conventional frequencies below 5 GHz.

Upside-Down World

The main track for 5G seems to be FDD for “old bands” below 3 GHz and TDD for “new bands” above 3 GHz (particularly mmWave frequencies). But physics advises us to do the opposite:

  • At lower frequencies, larger areas are covered, so most connections are likely to experience non-line-of-sight propagation. Since the channel coherence is large (it scales inversely proportionally to the Doppler), there is room for many terminals to transmit uplink pilots from which the base station can obtain CSI. Reciprocity-based beamforming in TDD operation is scalable with respect to the number of base station antennas and delivers great value (see the coherence-budget sketch after this list).
  • As the carrier frequency is increased, the coverage area shrinks and connections are more and more likely to experience line-of-sight propagation. At mmWave frequencies, all connections are either line-of-sight or consist of a small number of reflected components. The channel can then be parameterized with only a few angular parameters, and FDD operation with appropriate flavors of beam tracking may work satisfactorily. Reciprocity would certainly be desirable in this case too, but may not be necessary for the system to function.
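The following back-of-the-envelope sketch illustrates the coherence-budget argument in the first bullet; the pedestrian speed, coherence bandwidth, and 50% pilot overhead are illustrative assumptions.

```python
# Illustrative coherence-budget sketch: the coherence time shrinks inversely with
# the Doppler shift (i.e., with the carrier frequency for a given speed), so far
# fewer terminals can be trained per coherence block at mmWave frequencies. The
# speed, coherence bandwidth, and 50% pilot overhead are assumptions.
c = 3e8          # speed of light [m/s]
v = 1.4          # terminal speed [m/s] (pedestrian, assumed)
B_c = 200e3      # coherence bandwidth [Hz] (assumed)

for f_c in [2e9, 30e9]:                  # carrier frequency [Hz]
    f_D = v * f_c / c                    # maximum Doppler shift [Hz]
    T_c = 1 / (2 * f_D)                  # coherence time [s] (rule of thumb)
    pilots = T_c * B_c / 2               # at most half the coherence block for pilots
    print(f"f_c = {f_c / 1e9:4.0f} GHz: T_c ~ {T_c * 1e3:5.1f} ms, "
          f"~{pilots:6.0f} terminals can be trained per coherence block")
```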

Physics has given us the reciprocity principle. It should be exploited in wireless system design.

Pilot Contamination: an Ultimate Limitation?

Many misconceptions float around about the pilot contamination phenomenon. While present in any multi-cell system, its effect tends to be particularly pronounced in Massive MIMO due to the presence of coherent interference, which scales proportionally to the coherent beamforming gain. (Chapter 4 in Fundamentals of Massive MIMO gives the details.)
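The coherent nature of this interference can be illustrated with a small Monte Carlo sketch: two terminals in different cells reuse the same pilot, so the channel estimate is the sum of both channels, and maximum-ratio combining based on it amplifies the contaminating terminal's signal by the same array gain as the desired one. The i.i.d. Rayleigh fading and pilot noise level below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch of coherent pilot-contamination interference (i.i.d. Rayleigh
# fading, two terminals in different cells sharing one pilot, assumed pilot noise).
# Both the desired signal power and the interference power after maximum-ratio
# combining grow as M^2, so the ratio does not improve as M grows.
rng = np.random.default_rng(0)
trials, pilot_noise_std = 200, 0.1

for M in [10, 100, 1000]:
    sig = intf = 0.0
    for _ in range(trials):
        h1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        h2 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        n = pilot_noise_std * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        h_hat = h1 + h2 + n                      # contaminated channel estimate
        sig += np.abs(h_hat.conj() @ h1) ** 2    # desired signal power after MR combining
        intf += np.abs(h_hat.conj() @ h2) ** 2   # coherent interference power
    print(f"M = {M:4d}: signal/M^2 = {sig / trials / M**2:.2f}, "
          f"interference/M^2 = {intf / trials / M**2:.2f}")
```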

A good system design definitely must not ignore pilot interference. While it is easily removed “on the average” through greater-than-one reuse, the randomness present in wireless communications – especially the shadow fading – will occasionally cause a few terminals to be severely hit by pilot contamination, bringing down their performance. This is problematic whenever we are concerned about the provision of uniformly great service in the cell – and that is one of the principal selling arguments for Massive MIMO. Notwithstanding, the impact of pilot contamination can be reduced significantly in practice by appropriate pilot reuse and judicious power control. (Chapters 5-6 in Fundamentals of Massive MIMO give many details.)

A more fundamental question is whether pilot contamination could be entirely overcome: Does there exist an upper bound on capacity that saturates as the number of antennas, M, is increased indefinitely? Some have speculated that contamination cannot be overcome, much in line with known capacity upper bounds for cellular base station cooperation. While this question may be of more academic than practical interest, it has long been open except in some trivial special cases: If the channels of two terminals lie in non-overlapping subspaces and Bayesian channel estimation is used, the channel estimates will not be contaminated, and the capacity grows as log(M) when M increases without bound.

A much deeper result is established in this recent paper: the subspaces of the channel covariances may overlap, yet the capacity grows as log(M). Technically, Rayleigh fading with spatial correlation is assumed, and the correlation matrices of the contaminating terminals need only be linearly independent as M goes to infinity (exact conditions in the paper). In retrospect, this is not unreasonable given the substantial a priori knowledge exploited by the Bayesian channel estimator, but I found it amazing how weak the required conditions on the correlation matrices are. It remains unclear whether the result generalizes to the case of a growing number of interferers: letting the number of antennas go to infinity and then growing the network is not the same thing as taking an “infinite” (scalable) network and increasing the number of antennas. But this paper elegantly and rigorously answers a long-standing question that has been the subject of much debate in the community – and it is a recommended read for anyone interested in the fundamental limits of Massive MIMO.

Which Technology Can Give Greater Value?

The IEEE GLOBECOM conference, held in Washington D.C. this week, featured many good presentations and exhibitions. One well-attended event was the industry panel “Millimeter Wave vs. Below 5 GHz Massive MIMO: Which Technology Can Give Greater Value?”, organized by Thomas Marzetta and Robert Heath. They invited one team of Millimeter Wave proponents (Theodore Rappaport, Kei Sakaguchi, Charlie Zhang) and one team of Massive MIMO proponents (Chih-Lin I, Erik G. Larsson, Liesbet Van der Perre) to debate the pros and cons of the two 5G technologies.


For millimeter wave, the huge bandwidth was identified as the key benefit. Rappaport predicted that 30 GHz of bandwidth would be available in 5 years' time, while other panelists made a more conservative prediction of 15-20 GHz in 10 years' time. With such a huge bandwidth, a spectral efficiency of 1 bit/s/Hz is sufficient for an access point to deliver tens of Gbit/s to a single user. The panelists agreed that much work remains on millimeter wave channel modeling and on the design of circuits that can deliver the theoretical performance without huge losses. The lack of robustness towards blockage and similar propagation phenomena is also a major challenge.

For Massive MIMO, the straightforward support of user mobility, multiplexing of many users, and wide-area coverage were mentioned as key benefits. A 10x-20x gain in per-cell spectral efficiency, with performance guarantees for every user, was another major factor. Since these gains come from spatial multiplexing of users, rather than from increasing the spectral efficiency per user, a large number of users is required to achieve them in practice. With a small number of users, the Massive MIMO gains are modest, so it might not be a technology to deploy everywhere. Another drawback is the limited amount of spectrum below 5 GHz, which limits the peak data rates that can be achieved per user. The technology can deliver tens of Mbit/s, but maybe not Gbit/s rates per user.

Although the purpose of the panel was to debate the two 5G candidate technologies, I believe that the panelists agree that these technologies have complementary benefits. Today, you connect to WiFi when it is available and switch to cellular when the WiFi network cannot support you. Similarly, I imagine a future where you will enjoy the great data rates offered by millimeter wave, when you are covered by such an access point. Your device will then switch seamlessly to a Massive MIMO network, operating below 5 GHz, to guarantee ubiquitous connectivity when you are in motion or not covered by any millimeter wave access points.

Extreme Massive MIMO

Suppose extra antennas and RF chains came at no material cost. How large an array could then be useful, and would power consumption eventually render “extreme Massive MIMO” infeasible?

I have argued before that in a mobile access environment, no more than a few hundred antennas per base station will be useful. In an environment without significant mobility, however, the answer is different. In [1, Sec. 6.1], one case study establishes the feasibility of providing (fixed) wireless broadband service to 3000 homes, using a single isolated base station with 3200 antennas (zero-forcing processing and max-min power control). The power consumption of the associated digital signal processing is estimated in [1, homework #6.6] to be less than 500 Watt. The service of this many terminals is enabled by the long channel coherence (50 ms in the example).

Is this as massive as MIMO could ever get? Perhaps not. Conceivably, there will be environments with even larger channel coherence. Consider, for example, an outdoor city square with no cars or other traffic – hence no significant mobility. Ultimately, only measurements can determine the channel coherence, but assuming, for the sake of argument, a coherence block of 200 ms by 400 kHz gives room for training 40,000 terminals (if no more than 50% of the resources are spent on training). Multiplexing these terminals would require at least 40,000 antennas, which would, at 3 GHz and half-wavelength spacing, occupy an area of 10 x 10 meters (say, with a rectangular array for the sake of argument) – easily integrated onto the face of a skyscraper.
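A quick back-of-the-envelope check of these numbers (all inputs are the assumptions stated above, not measurements):

```python
# Back-of-the-envelope check of the numbers above (the coherence block, pilot
# overhead, and carrier frequency are the assumptions stated in the text).
coherence_time = 0.2                     # s (200 ms, assumed)
coherence_bw = 400e3                     # Hz (400 kHz, assumed)
samples = coherence_time * coherence_bw  # samples per coherence block
terminals = samples / 2                  # at most 50% of the block spent on training
spacing = 3e8 / 3e9 / 2                  # half-wavelength antenna spacing at 3 GHz [m]
side = spacing * terminals ** 0.5        # side of a square array, one antenna per terminal
print(f"{samples:.0f} samples per coherence block -> {terminals:.0f} terminals")
print(f"square array side: about {side:.0f} m at {spacing * 100:.0f} cm spacing")
```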

  • What gross rate would the base station offer? Assuming, conservatively, a spectral efficiency of 1 bit/s/Hz (with the usual uniform-service-for-all design), the gross rate in a 25 MHz bandwidth would amount to 1 Tbit/s (40,000 terminals × 1 bit/s/Hz × 25 MHz).
  • How much power would the digital processing require? A back-of-the-envelope calculation along the lines of the homework cited above suggests some 15 kW – the equivalent of a few domestic space heaters (I will return to the “energy efficiency” hype later on this blog).
  • How much transmit power is required? The exact value will depend on the coverage area, but to appreciate the order of magnitude, observe that when the number of antennas is doubled, the array gain is doubled, so the power radiated per terminal can be halved. If, simultaneously, the number of terminals is doubled, then the total radiated power is independent of the array size. Hence, the transmit power is small compared to the power required for processing.

Is this science fiction or will we be seeing this application in the future? The application is fully feasible with today’s circuit technology and does not violate any known physical or information-theoretic constraints. Machine-to-machine, IoT, or perhaps virtual-reality-type applications may eventually create the desirability, or need, to build extreme Massive MIMO.

[1] T. Marzetta, E. G. Larsson, H. Yang, H. Q. Ngo, Fundamentals of Massive MIMO, Cambridge University Press, 2016.
