All posts by Emil Björnson

Improving the Cell-Edge Performance

The cellular network that my smartphone connects to normally delivers 10-40 Mbit/s. That is sufficient for video-streaming and other applications that I might use. Unfortunately, I sometimes have poor coverage and then I can barely download emails or make a phone call. That is why I think that providing ubiquitous data coverage is the most important goal for 5G cellular networks. It might also be the most challenging 5G goal, because the area coverage has been an open problem since the first generation of cellular technology.

It is the physics that makes it difficult to provide good coverage. The transmitted signals spread out, and only a tiny fraction of the transmitted power reaches the receive antenna (e.g., one part in a billion). In cellular networks, the received signal power decays roughly as the fourth power of the propagation distance. This results in the following data rate coverage behavior:

Figure 1: Variations in the downlink data rates in an area covered by nine base stations.

This figure considers an area covered by nine base stations, which are located at the nine peaks. Users that are close to one of the base stations receive the maximum downlink data rate, which in this case is 60 Mbit/s (e.g., a spectral efficiency of 6 bit/s/Hz over a 10 MHz channel). As a user moves away from a base station, the data rate drops rapidly. At the cell edge, where the user is equally distant from multiple base stations, the rate is nearly zero in this simulation, because the received signal power is low compared with the receiver noise.
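
To make the distance dependence concrete, here is a minimal sketch (not the simulation behind Figure 1; the 50 m reference distance is an assumed value) that evaluates the Shannon rate under fourth-power pathloss, calibrated so that a user at the reference distance gets the maximum 60 Mbit/s over a 10 MHz channel:

    import numpy as np

    B = 10e6            # bandwidth [Hz]
    snr_ref = 2**6 - 1  # SNR yielding 6 bit/s/Hz at the reference distance
    d_ref = 50.0        # assumed reference distance [m]

    for d in [50, 100, 200, 400]:
        snr = snr_ref * (d_ref / d)**4  # received power decays as distance^-4
        rate = B * np.log2(1 + snr)     # Shannon rate of the link
        print(f"d = {d:3d} m: {rate / 1e6:5.1f} Mbit/s")

Doubling the distance reduces the received power by a factor of 16, which is why the rate collapses so quickly towards the cell edge.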

What can be done to improve the coverage?

One possibility is to increase the transmit power. This is mathematically equivalent to densifying the network, so that the area covered by each base station is smaller. The figure below shows what happens if we use 100 times more transmit power:

Figure 2: The transmit powers have been increased 100 times as compared to Figure 1.

There are some visible differences as compared to Figure 1. First, the region around the base station that gives 60 Mbit/s is larger. Second, the data rates at the cell edge are slightly improved, but there are still large variations within the area. However, it is no longer the noise that limits the cell-edge rates—it is the interference from other base stations.

The inter-cell interference remains even if we further increase the transmit power. The reason is that the desired signal power and the interfering signal power grow at the same rate at the cell edge. The same thing happens if we densify the network by adding more base stations, as nicely explained in a recent paper by Andrews et al.

Ideally, we would like to increase only the power of the desired signals, while keeping the interference power fixed. This is what transmit precoding from a multi-antenna array can achieve; the transmitted signals from the multiple antennas at the base station add constructively only at the spatial location of the desired user. More precisely, the signal power is proportional to M (the number of antennas), while the interference power caused to other users is independent of M. The following figure shows the data rates when we go from 1 to 100 antennas:

Figure 3: The number of base station antennas has been increased from 1 (as in Figure 1) to 100.

Figure 3 shows that the data rates are increased for all users, but particularly for those at the cell edge. In this simulation, everyone is now guaranteed a minimum data rate of 30 Mbit/s, while 60 Mbit/s is delivered in a large fraction of the coverage area.
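
The scaling behind this improvement can be illustrated by a small sketch (the power values below are assumed and normalized, not taken from the simulation): the array gain multiplies only the desired signal power by M, while the interference and noise are unaffected:

    import numpy as np

    B = 10e6        # bandwidth [Hz]
    P_signal = 1.0  # desired signal power for M = 1 (normalized, assumed)
    P_interf = 1.0  # inter-cell interference power, independent of M
    P_noise = 0.5   # noise power (normalized, assumed)

    for M in [1, 10, 100]:
        sinr = M * P_signal / (P_interf + P_noise)  # array gain on the signal only
        rate = B * np.log2(1 + sinr)
        print(f"M = {M:3d}: {rate / 1e6:5.1f} Mbit/s")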

In practice, the propagation losses are not only distance-dependent, but also affected by other large-scale effects, such as shadowing. Nevertheless, the properties described above remain. Coherent precoding from a base station with many antennas can greatly improve the data rates for cell-edge users, since only the desired signal power (and not the interference power) is increased. Higher transmit power or smaller cells will only lead to an interference-limited regime where the cell-edge performance remains poor. A practical challenge with coherent precoding is that the base station needs to learn the user channels, but reciprocity-based Massive MIMO provides a scalable solution to that. That is why Massive MIMO is the key technology for delivering ubiquitous connectivity in 5G.

Field Tests of FDD Massive MIMO

Frequency-division duplex (FDD) operation of Massive MIMO in LTE is the topic of two press releases from January 2017. The first press release describes a joint field test carried out by ZTE and China Telecom. It claims three-fold improvements in per-cell spectral efficiency using standard LTE devices, but no further details are given. The second press release describes a field verification carried out by Huawei and China Unicom. The average data rate was 87 Mbit/s per user over a 20 MHz channel and was achieved using commercial LTE devices. This corresponds to a spectral efficiency of 4.36 bit/s/Hz per user. A sum rate of 697 Mbit/s is also mentioned, from which one could guess that eight users were multiplexed (87 × 8 = 696).

Image source: Huawei

There are no specific details of the experimental setup or implementation in any of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology and hopefully it will result in something revolutionary in the years to come.

Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.

We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?

Update: Blue Danube Systems has issued a press release describing trials of FDD Massive MIMO as well. Many companies apparently want to be “first” with this technology for LTE.

More Bandwidth Requires More Power or Antennas

The main selling point of millimeter-wave communications is the abundant bandwidth available in such frequency bands; for example, 2 GHz of bandwidth instead of 20 MHz as in conventional cellular networks. The underlying argument is that the use of much wider bandwidths immediately leads to much higher capacities, in terms of bit/s, but the reality is not that simple.

To look into this, consider a communication system operating over a bandwidth of $B$ Hz. By assuming an additive white Gaussian noise channel, the capacity becomes

     $$ C = B \log_2 \left(1+\frac{P \beta}{N_0 B} \right)$$

where $P$ W is the transmit power, $\beta$ is the channel gain, and $N_0$ W/Hz is the power spectral density of the noise. The term $(P \beta)/(N_0 B)$ inside the logarithm is referred to as the signal-to-noise ratio (SNR).

Since the bandwidth $B$ appears in front of the logarithm, it might seem that the capacity grows linearly with the bandwidth. This is not the case, since the noise term $N_0 B$ in the SNR also grows linearly with the bandwidth. This fact is illustrated by Figure 1 below, where we consider a system that achieves an SNR of 0 dB at a reference bandwidth of 20 MHz. As we increase the bandwidth towards 2 GHz, the capacity grows only modestly. Despite having 100 times more bandwidth, the capacity only improves by $1.44\times$, which is far from the $100\times$ that a linear increase would give.

Figure 1: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz. The transmit power is fixed.

The reason for this modest capacity growth is that the SNR decays inversely proportionally to the bandwidth. One can show that

     $$ C \to \frac{P \beta}{N_0}\log_2(e) \quad \textrm{as} \,\, B \to \infty.$$

The convergence to this limit is seen in Figure 1 and is relatively fast, since $\log_2(1+x) \approx x \log_2(e)$ for small $x$.
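
These numbers are easy to reproduce; here is a minimal sketch under the same assumptions (0 dB SNR at the 20 MHz reference bandwidth, fixed transmit power):

    import numpy as np

    B0 = 20e6   # reference bandwidth [Hz]
    snr0 = 1.0  # 0 dB SNR at the reference bandwidth, so P*beta/N_0 = snr0 * B0

    def capacity(B):
        return B * np.log2(1 + snr0 * B0 / B)  # SNR scales as 1/B for fixed power

    print(capacity(B0) / 1e6)               # 20.0 Mbit/s at 20 MHz
    print(capacity(100 * B0) / 1e6)         # about 28.7 Mbit/s at 2 GHz (1.44x)
    print(snr0 * B0 * np.log2(np.e) / 1e6)  # asymptotic limit: about 28.9 Mbit/s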

To achieve a linear capacity growth, we need to keep the SNR $(P \beta)/(N_0 B)$ fixed as the bandwidth increases. This can be achieved by increasing the transmit power $P$ proportionally to the bandwidth, which entails using $100\times$ more power when operating over a $100\times$ wider bandwidth. This might not be desirable in practice, at least not for battery-powered devices.

An alternative is to use beamforming to improve the channel gain. In a Massive MIMO system, the effective channel gain is $\beta = \beta_1 M$, where $M$ is the number of antennas and $\beta_1$ is the gain of a single-antenna channel. Hence, we can increase the number of antennas proportionally to the bandwidth to keep the SNR fixed.

Figure 2: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz with one antenna. The transmit power (or the number of antennas) is either fixed or grows proportionally to the bandwidth.

Figure 2 considers the same setup as in Figure 1, but now we also let either the transmit power or the number of antennas grow proportionally to the bandwidth. In both cases, we achieve a capacity that grows proportionally to the bandwidth, as we initially hoped for.
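
A sketch of the two remedies, under the same assumptions as above: scaling the transmit power (or, equivalently in the SNR, the number of antennas, since the effective gain is $\beta = \beta_1 M$) with the bandwidth keeps the SNR fixed and restores the linear growth:

    import numpy as np

    B0, snr0 = 20e6, 1.0  # 20 MHz reference bandwidth, 0 dB reference SNR

    for scale in [1, 10, 100]:
        B = scale * B0
        C_fixed = B * np.log2(1 + snr0 / scale)  # fixed power: SNR drops as 1/B
        C_scaled = B * np.log2(1 + snr0)         # P or M scaled with B: SNR fixed
        print(f"B = {B / 1e6:4.0f} MHz: {C_fixed / 1e6:5.1f} vs {C_scaled / 1e6:6.1f} Mbit/s")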

In conclusion, to make efficient use of more bandwidth we require more transmit power or more antennas at the transmitter and/or receiver. It is worth noting that these requirements are purely due to the increase in bandwidth. In addition, for any given bandwidth, the operation at millimeter-wave frequencies requires much more transmit power and/or more antennas (e.g., additional constant-gain antennas or one constant-aperture antenna) just to achieve the same SNR as in a system operating at conventional frequencies below 5 GHz.

Massive MIMO Trials in LTE Networks

Massive MIMO is often mentioned as a key 5G technology, but could it also be exploited in currently standardized LTE networks? The ZTE-Telefónica trials that were initiated in October 2016 show that this is indeed possible. The press release from late last year describes the first results. For example, the trial showed improvements in network capacity and cell-edge data rates of up to six times, compared to traditional LTE.

The Massive MIMO blog has talked with Javier Lorca Hernando at Telefónica to get further details. The trials were carried out at the Telefónica headquarters in Madrid. A base station with 128 antenna ports was deployed on the rooftop of one of their buildings and the users were located on one floor of the central building, approximately 100 m from the base station. The users basically had cell-edge conditions, due to the metallized glass and multiple metallic constructions surrounding them.

The uplink and downlink data transmissions were carried out in the 2.6 GHz band. Typical Massive MIMO time-division duplex (TDD) operation was considered, where the uplink detection and downlink precoding are based on uplink pilots and channel reciprocity. The existing LTE sounding reference signals (SRSs) were used as uplink pilots. The reciprocity-based precoding was implemented using LTE’s transmission mode 8 (TM8), which supports any type of precoding. Downlink pilots were used for link adaptation and demodulation purposes.

It is great to see that Massive MIMO can also be implemented in LTE systems. In this trial, the users were static and relatively few, but it will be exciting to see if the existing LTE reference signals will also enable Massive MIMO communications for a multitude of mobile users!

Update: ZTE has carried out similar experiments in cooperation with Smartfren in Indonesia. Additional field trials are mentioned in the comments to this post.

Channel Hardening Makes Fading Channels Behave as Deterministic

One of the main impairments in wireless communications is small-scale channel fading. This refers to random fluctuations in the channel gain, which are caused by microscopic changes in the propagation environment. The fluctuations make the channel unreliable, since occasionally the channel gain is very small and the transmitted data is then received in error.

The diversity achieved by sending a signal over multiple channels with independent realizations is key to combating small-scale fading. Spatial diversity is particularly attractive, since it can be obtained by simply having multiple antennas at the transmitter or the receiver. Suppose the probability of a bad channel gain realization is $p$. If we have $M$ antennas with independent channel gains, then the risk that all of them are bad is $p^M$. For example, with $p=0.1$, there is a 10% risk of getting a bad channel in a single-antenna system and a 0.000001% risk in an 8-antenna system. This shows that just a few antennas can be sufficient to greatly improve reliability.
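
The computation behind these numbers is a one-liner:

    # If each antenna's channel is bad with probability p, independently,
    # then all M antennas are bad simultaneously with probability p^M.
    p = 0.1
    for M in [1, 2, 4, 8]:
        print(f"M = {M}: all antennas bad with probability {100 * p**M:.6f}%")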

In Massive MIMO systems, with a “massive” number of antennas at the base station, the spatial diversity also leads to something called “channel hardening”. This terminology was used already in a paper from 2004:

B. M. Hochwald, T. L. Marzetta, and V. Tarokh, “Multiple-antenna channel hardening and its implications for rate feedback and scheduling,” IEEE Transactions on Information Theory, vol. 50, no. 9, pp. 1893–1909, 2004.

In short, channel hardening means that a fading channel behaves as if it was a non-fading channel. The randomness is still there but its impact on the communication is negligible. In the 2004 paper, the hardening is measured by dividing the instantaneous supported data rate by the fading-averaged data rate. If the relative fluctuations are small, then the channel has hardened.

Since Massive MIMO systems contain random interference, it is usually the hardening of the channel that the desired signal propagates over that is studied. If the channel is described by a random $M$-dimensional vector $\mathbf{h}$, then the ratio $\|\mathbf{h}\|^2 / \mathrm{E}\{\|\mathbf{h}\|^2\}$ between the instantaneous channel gain and its average is considered. If the fluctuations of this ratio are small, then there is channel hardening. With an independent Rayleigh fading channel, the variance of the ratio reduces with the number of antennas as $1/M$. The intuition is that the channel fluctuations average out over the antennas. A detailed analysis is available in a recent paper.

The variance of $\|\mathbf{h}\|^2 / \mathrm{E}\{\|\mathbf{h}\|^2\}$ decays as $1/M$ for independent Rayleigh fading channels.

The figure above shows how the variance of $\|\mathbf{h}\|^2 / \mathrm{E}\{\|\mathbf{h}\|^2\}$ decays with the number of antennas. The convergence towards zero is gradual and so is the channel hardening effect. I personally think that you need at least $M=50$ to truly benefit from channel hardening.
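
The $1/M$ decay in the figure is easy to reproduce with a Monte Carlo sketch, assuming i.i.d. $\mathcal{CN}(0,1)$ channel coefficients:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # number of channel realizations

    for M in [1, 10, 50, 100]:
        h = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
        ratio = np.sum(np.abs(h)**2, axis=1) / M  # ||h||^2 / E{||h||^2}, as E{||h||^2} = M
        print(f"M = {M:3d}: variance {ratio.var():.4f} (theory: {1 / M:.4f})")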

Channel hardening has several practical implications. One is the improved reliability of having a nearly deterministic channel, which results in lower latency. Another is the lack of scheduling diversity; that is, one cannot schedule users when their $\|\mathbf{h}\|^2$ is unusually large, since the fluctuations are small. There is also little to gain from estimating the current realization of $\|\mathbf{h}\|^2$, since it is relatively close to its average value. This can alleviate the need for downlink pilots in Massive MIMO.

Pilot Contamination in a Nutshell

One word that is tightly connected with Massive MIMO is pilot contamination. This is a phenomenon that can appear in any communication system that operates under interference, but in this post, I will describe its basic properties in Massive MIMO.

The base station wants to know the channel responses of its user terminals and these are estimated in the uplink by sending pilot signals. Each pilot signal is corrupted by inter-cell interference and noise when received at the base station. For example, consider the scenario illustrated below where two terminals are transmitting simultaneously, so that the base station receives a superposition of their signals—that is, the desired pilot signal is contaminated.

When estimating the channel from the desired terminal, the base station cannot easily separate the signals from the two terminals. This has two key implications:

First, the interfering signal acts as colored noise that reduces the channel estimation accuracy.

Second, the base station unintentionally estimates a superposition of the channel from the desired terminal and from the interferer. Later, the desired terminal sends payload data and the base station wishes to coherently combine the received signal, using the channel estimate. It will then unintentionally and coherently combine part of the interfering signal as well. This is particularly poisonous when the base station has M antennas, since the array gain from the receive combining increases both the signal power and the interference power proportionally to M. Similarly, when the base station transmits a beamformed downlink signal towards its terminal, it will unintentionally direct some of the signal towards the interferer. This is illustrated below.

In the academic literature, pilot contamination is often studied under the assumption that the interfering terminal sends the same pilot signal as the desired terminal, but in practice any non-orthogonal interfering signal will cause the two effects described above.
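
As a toy illustration of the second effect, consider the worst case where the interferer sends the same pilot, so that the (noise-free, for simplicity) channel estimate is the sum of the two channels; the unit-power Rayleigh channels below are assumed for illustration. Maximum-ratio combining based on this estimate collects the interferer's signal with the same array gain as the desired signal:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000  # number of channel realizations

    def cn(n, M):  # i.i.d. CN(0,1) channel vectors
        return (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)

    for M in [10, 100]:
        h = cn(n, M)  # channel of the desired terminal
        g = cn(n, M)  # channel of the interfering terminal
        v = h + g     # contaminated estimate, used for maximum-ratio combining
        sig = np.mean(np.abs(np.sum(np.conj(v) * h, axis=1))**2)   # combined signal power
        intf = np.mean(np.abs(np.sum(np.conj(v) * g, axis=1))**2)  # combined interference
        print(f"M = {M:3d}: signal {sig:9.0f}, interference {intf:9.0f}")

Both terms grow in the same way with M, so the array gain amplifies the interference together with the desired signal.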

Which Technology Can Give Greater Value?

The IEEE GLOBECOM conference, held in Washington D.C. this week, featured many good presentations and exhibitions. One well-attended event was the industry panel “Millimeter Wave vs. Below 5 GHz Massive MIMO: Which Technology Can Give Greater Value?“, organized by Thomas Marzetta and Robert Heath. They invited one team of Millimeter Wave proponents (Theodore Rappaport, Kei Sakaguchi, Charlie Zhang) and one team of Massive MIMO proponents (Chih-Lin I, Erik G. Larsson, Liesbet Van der Perre) to debate the pros and cons of the two 5G technologies.


For millimeter wave, the huge bandwidth was identified as the key benefit. Rappaport predicted that 30 GHz of bandwidth would be available in five years’ time, while other panelists made a more conservative prediction of 15-20 GHz in ten years’ time. With such a huge bandwidth, a spectral efficiency of 1 bit/s/Hz is sufficient for an access point to deliver tens of Gbit/s to a single user. The panelists agreed that much work remains on millimeter wave channel modeling and on the design of circuits that can deliver the theoretical performance without huge losses. The lack of robustness towards blockage and similar propagation phenomena is also a major challenge.

For Massive MIMO, the straightforward support of user mobility, multiplexing of many users, and wide-area coverage were mentioned as key benefits. A 10x-20x gain in per-cell spectral efficiency, with performance guarantees for every user, was another major factor. Since these gains come from spatial multiplexing of users, rather than increasing the spectral efficiency per user, a large number of users are required to achieve these gains in practice. With a small number of users, the Massive MIMO gains are modest, so it might not be a technology to deploy everywhere. Another drawback is the limited amount of spectrum in the range below 5 GHz, which limits the peak data rates that can be achieved per user. The technology can deliver tens of Mbit/s, but maybe not any Gbit/s per user.

Although the purpose of the panel was to debate the two 5G candidate technologies, I believe that the panelists agree that these technologies have complementary benefits. Today, you connect to WiFi when it is available and switch to cellular when the WiFi network cannot support you. Similarly, I imagine a future where you will enjoy the great data rates offered by millimeter wave, when you are covered by such an access point. Your device will then switch seamlessly to a Massive MIMO network, operating below 5 GHz, to guarantee ubiquitous connectivity when you are in motion or not covered by any millimeter wave access points.