All posts by Emil Björnson

Channel Hardening Makes Fading Channels Behave as Deterministic

One of the main impairments in wireless communications is small-scale channel fading. This refers to random fluctuations in the channel gain, which are caused by microscopic changes in the propagation environments. The fluctuations make the channel unreliable, since occasionally the channel gain is very small and the transmitted data is then received in error.

The diversity achieved by sending a signal over multiple channels with independent realizations is key to combating small-scale fading. Spatial diversity is particularly attractive, since it can be obtained by simply having multiple antennas at the transmitter or the receiver. Suppose the probability of a bad channel gain realization is p. If we have M antennas with independent channel gains, then the risk that all of them are bad is p^M. For example, with p=0.1, there is a 10% risk of getting a bad channel in a single-antenna system and a 0.000001% risk in an 8-antenna system. This shows that just a few antennas can be sufficient to greatly improve reliability.
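To make the p^M rule concrete, here is a minimal Python sketch that evaluates the outage probability for a few antenna counts, using the p = 0.1 value from the example above:

```python
# Probability that all M independent channels are simultaneously "bad",
# assuming each antenna's channel is bad independently with probability p.
p = 0.1  # example value from the text

for M in [1, 2, 4, 8]:
    outage = p ** M
    print(f"M = {M}: risk that all channels are bad = {outage:.2e} ({100 * outage:.6f} %)")
```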

In Massive MIMO systems, with a “massive” number of antennas at the base station, the spatial diversity also leads to something called “channel hardening”. This terminology was already used in a paper from 2004:

B. M. Hochwald, T. L. Marzetta, and V. Tarokh, “Multiple-antenna channel hardening and its implications for rate feedback and scheduling,” IEEE Transactions on Information Theory, vol. 50, no. 9, pp. 1893–1909, 2004.

In short, channel hardening means that a fading channel behaves as if it was a non-fading channel. The randomness is still there, but its impact on the communication is negligible. In the 2004 paper, the hardening is measured by dividing the instantaneous supported data rate by the fading-averaged data rate. If the relative fluctuations of this ratio are small, then the channel has hardened.

Since Massive MIMO systems contain random interference, it is usually the hardening of the channel that the desired signal propagates over that is studied. If the channel is described by a random M-dimensional vector h, then one considers the ratio ||h||²/E{||h||²} between the instantaneous channel gain and its average. If the fluctuations of this ratio are small, then there is channel hardening. With an independent Rayleigh fading channel, the variance of the ratio reduces with the number of antennas as 1/M. The intuition is that the channel fluctuations average out over the antennas. A detailed analysis is available in a recent paper.

The variance of ||h||²/E{||h||²} decays as 1/M for independent Rayleigh fading channels.

The figure above shows how the variance of ||h||²/E{||h||²} decays with the number of antennas. The convergence towards zero is gradual and so is the channel hardening effect. I personally think that you need at least M=50 to truly benefit from channel hardening.
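The 1/M scaling is easy to verify numerically. Below is a minimal Monte Carlo sketch, assuming i.i.d. Rayleigh fading (h with independent CN(0,1) entries) and 10,000 channel realizations per antenna count:

```python
# Monte Carlo sketch of channel hardening with i.i.d. Rayleigh fading:
# the variance of ||h||^2 / E{||h||^2} should decay roughly as 1/M.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000   # channel realizations per antenna count

for M in [1, 10, 50, 100, 200]:
    # h has independent CN(0, 1) entries (Rayleigh fading)
    h = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)     # instantaneous channel gain ||h||^2
    ratio = gain / gain.mean()                # ||h||^2 / E{||h||^2}, with an empirical mean
    print(f"M = {M:3d}: variance of the ratio = {ratio.var():.4f}  (1/M = {1 / M:.4f})")
```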

Channel hardening has several practical implications. One is the improved reliability of having a nearly deterministic channel, which results in lower latency. Another is the lack of scheduling diversity; that is, one cannot schedule users when their ||h||² are unusually large, since the fluctuations are small. There is also little to gain from estimating the current realization of ||h||², since it is relatively close to its average value. This can alleviate the need for downlink pilots in Massive MIMO.

Pilot Contamination in a Nutshell

One word that is tightly connected with Massive MIMO is pilot contamination. This is a phenomenon that can appear in any communication system that operates under interference, but in this post, I will describe its basic properties in Massive MIMO.

The base station wants to know the channel responses of its user terminals and these are estimated in the uplink by sending pilot signals. Each pilot signal is corrupted by inter-cell interference and noise when received at the base station. For example, consider the scenario illustrated below where two terminals are transmitting simultaneously, so that the base station receives a superposition of their signals—that is, the desired pilot signal is contaminated.

When estimating the channel from the desired terminal, the base station cannot easily separate the signals from the two terminals. This has two key implications:

First, the interfering signal acts as colored noise that reduces the channel estimation accuracy.

Second, the base station unintentionally estimates a superposition of the channel from the desired terminal and from the interferer. Later, the desired terminal sends payload data and the base station wishes to coherently combine the received signal, using the channel estimate. It will then unintentionally and coherently combine part of the interfering signal as well. This is particularly poisonous when the base station has M antennas, since the array gain from the receive combining increases both the signal power and the interference power proportionally to M. Similarly, when the base station transmits a beamformed downlink signal towards its terminal, it will unintentionally direct some of the signal towards the interferer. This is illustrated below.

In the academic literature, pilot contamination is often studied under the assumption that the interfering terminal sends the same pilot signal as the desired terminal, but in practice any non-orthogonal interfering signal will cause the two effects described above.
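To see the second effect numerically, here is a hedged Monte Carlo sketch (the i.i.d. Rayleigh channels, the least-squares estimator, and maximum-ratio combining are my own illustrative assumptions, not a statement about any particular system). With a shared pilot, both the desired signal power and the coherent interference power after combining grow as M²; if the interferer instead had an orthogonal pilot, its post-combining power would only grow as M:

```python
# Sketch of the pilot contamination effect under maximum-ratio combining. Assumptions:
# i.i.d. Rayleigh channels h (desired user) and g (interfering user in another cell),
# both users send the same pilot, and the base station uses a least-squares estimate.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000          # channel realizations
noise_std = 0.1     # pilot noise standard deviation per antenna

def rayleigh(shape):
    """Independent CN(0, 1) entries (Rayleigh fading)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def combined_power(estimate, channel):
    """Average power after maximum-ratio combining: E{ |estimate^H channel|^2 }."""
    return np.mean(np.abs(np.sum(np.conj(estimate) * channel, axis=1)) ** 2)

for M in [10, 50, 100, 200]:
    h, g = rayleigh((N, M)), rayleigh((N, M))
    noise = noise_std * rayleigh((N, M))

    h_hat = h + g + noise       # contaminated estimate: superposition of both channels
    h_hat_clean = h + noise     # estimate if the interferer had used an orthogonal pilot

    print(f"M = {M:3d}: desired signal ~ {combined_power(h_hat, h):9.0f}, "
          f"coherent interference ~ {combined_power(h_hat, g):9.0f} (both grow as M^2), "
          f"orthogonal-pilot interference ~ {combined_power(h_hat_clean, g):6.0f} (grows as M)")
```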

Which Technology Can Give Greater Value?

The IEEE GLOBECOM conference, held in Washington D.C. this week, featured many good presentations and exhibitions. One well-attended event was the industry panel “Millimeter Wave vs. Below 5 GHz Massive MIMO: Which Technology Can Give Greater Value?”, organized by Thomas Marzetta and Robert Heath. They invited one team of Millimeter Wave proponents (Theodore Rappaport, Kei Sakaguchi, Charlie Zhang) and one team of Massive MIMO proponents (Chih-Lin I, Erik G. Larsson, Liesbet Van der Perre) to debate the pros and cons of the two 5G technologies.


For millimeter wave, the huge bandwidth was identified as the key benefit. Rappaport predicted that 30 GHz of bandwidth would be available in 5 years’ time, while other panelists made a more conservative prediction of 15-20 GHz in 10 years’ time. With such a huge bandwidth, a spectral efficiency of 1 bit/s/Hz is sufficient for an access point to deliver tens of Gbit/s to a single user. The panelists agreed that much work remains on millimeter wave channel modeling and the design of circuits that can deliver the theoretical performance without huge losses. The lack of robustness towards blockage and similar propagation phenomena is also a major challenge.

For Massive MIMO, the straightforward support of user mobility, multiplexing of many users, and wide-area coverage were mentioned as key benefits. A 10x-20x gain in per-cell spectral efficiency, with performance guarantees for every user, was another major factor. Since these gains come from spatial multiplexing of users, rather than increasing the spectral efficiency per user, a large number of users is required to achieve them in practice. With a small number of users, the Massive MIMO gains are modest, so it might not be a technology to deploy everywhere. Another drawback is the limited amount of spectrum in the range below 5 GHz, which limits the peak data rates that can be achieved per user. The technology can deliver tens of Mbit/s per user, but probably not Gbit/s.

Although the purpose of the panel was to debate the two 5G candidate technologies, I believe that the panelists agree that these technologies have complementary benefits. Today, you connect to WiFi when it is available and switch to cellular when the WiFi network cannot support you. Similarly, I imagine a future where you will enjoy the great data rates offered by millimeter wave, when you are covered by such an access point. Your device will then switch seamlessly to a Massive MIMO network, operating below 5 GHz, to guarantee ubiquitous connectivity when you are in motion or not covered by any millimeter wave access points.

The Dense Urban Information Society

5G cellular networks are supposed to deal with many challenging communication scenarios where today’s cellular networks fall short.  In this post, we have a look at one such scenario, where Massive MIMO is key to overcome the challenges.

The METIS research project has identified twelve test cases for 5G connectivity. One of these is the “Dense urban information society”, which is

“…concerned with the connectivity required at any place and at any time by humans in dense urban environments. We here consider both the traffic between humans and the cloud, and also direct information exchange between humans or with their environment. The particular challenge lies in the fact that users expect the same quality of experience no matter whether they are at their workplace, enjoying leisure activities such as shopping, or being on the move on foot or in a vehicle.”

Source: METIS, deliverable D1.1 “Scenarios, requirements and KPIs for 5G mobile and wireless system”

Hence, the challenge is to provide ubiquitous connectivity in urban areas, where there will be massive user loads in the future: up to 200,000 devices per km² is predicted by METIS. In their test case, each device requests one data packet per minute, which should be transferred within one second. Hence, there are on average up to 200,000/60 ≈ 3,333 active users per km² at any given time.

Hexagonal cellular network, with adjacent cells having different colors for clarity.

This large number of users is a challenge that Massive MIMO is particularly well-suited for. One of the key benefits of the Massive MIMO technology is the high spectral efficiency that it achieves by spatial multiplexing of tens of users per cell. Suppose, for example, that the cells are deployed in a hexagonal pattern with a base station in each cell center, as illustrated in the figure. How many simultaneously active users will there be per cell in the dense urban information society? That depends on the area of a cell. An inter-site distance (ISD) of 0.25 km is common in contemporary urban deployments. In this case, one can show that the area covered by each cell is √3×ISD²/2 ≈ 0.054 km².
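As a quick check of that number, here is a small Python computation of the hexagonal cell area for the assumed ISD of 0.25 km:

```python
# Hexagonal cell area for a given inter-site distance (ISD): sqrt(3) * ISD^2 / 2.
import math

ISD = 0.25  # km, the inter-site distance assumed in the text
cell_area = math.sqrt(3) * ISD ** 2 / 2
print(f"Cell area: {cell_area:.3f} km^2")   # prints roughly 0.054 km^2
```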


The number of active users per cell is then obtained by multiplying the cell area with the user density. Three examples are provided in the table below:

User density                      10³ users/km²   10⁴ users/km²   10⁵ users/km²
Total number of users per cell         54               540             5400
Average active users per cell           0.9               9               90

Recall that 1/60 of the total number of users are active simultaneously, in the urban information society test case. This gives the numbers in the second row of the table.
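The table can be reproduced with a few lines of Python (using the cell area of roughly 0.054 km² computed above, rounded as in the text):

```python
# Reproducing the table: total users per cell = user density * cell area, and on
# average 1/60 of them are active (one packet per minute, delivered within a second).
cell_area = 0.054   # km^2, from the hexagonal-cell calculation above

for density in [1_000, 10_000, 100_000]:   # users per km^2
    total_per_cell = density * cell_area
    active_per_cell = total_per_cell / 60
    print(f"{density:>7} users/km^2: {total_per_cell:6.0f} users per cell, "
          f"{active_per_cell:5.1f} active on average")
```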

From this table, notice that there will be tens of simultaneously active users per cell, when the user density is above 10,000 per km². This is a number substantially smaller than the 200,000 per km² predicted by the METIS project. Hence, there will likely be many future urban deployment scenarios with sufficiently many users to benefit from Massive MIMO.

A fraction of these users can (and probably will) be offloaded to WiFi-like networks, maybe operating at mmWave frequencies. But since local-area networks provide only patchy coverage, it is inevitable that many users and devices will rely on the cellular networks to achieve ubiquitous connectivity, with uniform quality-of-service everywhere.

In summary, Massive MIMO is what we need to realize the dream of ubiquitous connectivity in the dense urban information society.

Cellular Multi-User MIMO: A Technology Whose Time has Come

Both the number of wirelessly connected devices and the traffic that they generate have steadily grown since the early days of cellular communications. This continuously calls for improvements in the area capacity [bit/s/km²] of the networks. The use of adaptive antenna arrays was identified as a potential capacity-improving technology in the mid-eighties. An early uplink paper was “Optimum combining for indoor radio systems with multiple users” from 1987 by J. Winters at Bell Labs. An early downlink paper was “The performance enhancement of multibeam adaptive base-station antennas for cellular land mobile radio systems” by S. C. Swales et al. from 1990.

The multi-user MIMO concept, then called space-division multiple access (SDMA), was picked up by the industry in the nineties. For example, Ericsson made field-trials with antenna arrays in GSM systems, which were reported in “Adaptive antennas for GSM and TDMA systems” from 1999. ArrayComm filed an SDMA patent in 1991 and made trials in the nineties. In cooperation with the manufacturer Kyocera, this resulted in commercial deployment of SDMA as an overlay to the TDD-based Personal Handy-phone System (PHS).

Trial with a 12-element circular array by ArrayComm, in the late nineties.


Given this history, why isn’t multi-user MIMO a key ingredient in current cellular networks? I think there are several answers to this question:

  1. Most cellular networks use FDD spectrum. To acquire the downlink channels, the SDMA research first focused on angle-of-arrival estimation and later on beamforming codebooks. The cellular propagation environments turned out to be far more complicated than such system concepts can easily handle.
  2. The breakthroughs in information theory for multi-user MIMO happened in the early 2000s, thus there was no theoretical framework that the industry could use in the nineties to evaluate and optimize their multiple antenna concepts.
  3. In practice, it has been far easier to increase the area capacity by deploying more base stations and using more spectrum, rather than developing more advanced base station hardware. In current networks, there are typically zero, one, or two users active per cell at a time, and then there is little need for multi-user MIMO.

Why is multi-user MIMO considered a key 5G technology? Basically because the three issues described above have now changed substantially. There is a renewed interest in TDD, with successful cellular deployments in Asia and WiFi being used everywhere. Massive MIMO is the refined form of multi-user MIMO, where the TDD operation enables channel estimation in any propagation environment, the many antennas allow for low-complexity signal processing, and the scalable protocols are suitable for large-scale deployments. The technology can nowadays be implemented using power-efficient off-the-shelf radio-frequency transceivers, as demonstrated by testbeds. Massive MIMO builds upon a solid ground of information theory, which shows how to communicate efficiently under practical impairments such as interference and imperfect channel knowledge.

Maybe most importantly, spatial multiplexing is needed to manage the future data traffic growth. This is because deploying many more base stations or obtaining much more spectrum are not viable options if we want to maintain network coverage—small cells at the street-level are easily shadowed by buildings and mm-wave frequency signals do not propagate well through walls. In 5G networks, a typical cellular base station might have tens of active users at a time, which is a sufficient number to benefit from the great spectral efficiency offered by Massive MIMO.

Are There Any Massive MIMO Books?

I regularly get the question “are there any Massive MIMO books?”. So far my answer has always been “no”, but now I can finally give a positive answer.

My colleagues Erik G. Larsson and Hien Quoc Ngo have written a book entitled “Fundamentals of Massive MIMO” together with Thomas L. Marzetta and Hong Yang at Bell Labs, Nokia. The book is published this October/November by Cambridge University Press.

I have read the book and I think it serves as an excellent introduction to the topic. The text is suitable for graduate students, practicing engineers, professors, and doctoral students who would like to learn the basic Massive MIMO concept, results and properties. It also provides a clean introduction to the theoretical tools that are suitable for analyzing the Massive MIMO performance.

I personally intend to use this book as course material for a Master level course on Multiple-antenna communications next year. I recommend that other teachers also consider this possibility!

A preview of the book can be found on Google Books.

Update: Since November 2017, there is another book: “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency“.

How Much does Massive MIMO Improve the Spectral Efficiency?

It is often claimed in the academic literature that Massive MIMO can greatly improve the spectral efficiency. What does it mean, qualitatively and quantitatively? This is what I will try to explain.

With spectral efficiency, we usually mean the sum spectral efficiency of the transmissions in a cell of a cellular network. It is measured in bit/s/Hz. If you multiply it with the bandwidth, you will get the cell throughput measured in bit/s. Since the bandwidth is a scarce resource, particularly at the frequencies below 5 GHz that are suitable for network coverage, it is highly desirable to improve the cell throughput by increasing the spectral efficiency rather than increasing the bandwidth.

A great way to improve the spectral efficiency is to simultaneously serve many user terminals in the cell, over the same bandwidth, by means of space division multiple access. This is where Massive MIMO is king. There is no doubt that this technology can improve the spectral efficiency. The question is rather “how much?”

Earlier this year, the joint experimental effort by the universities in Bristol and Lund demonstrated an impressive spectral efficiency of 145.6 bit/s/Hz, over a 20 MHz bandwidth in the 3.5 GHz band. The experiment was carried out in a single-cell indoor environment. Their huge spectral efficiency can be compared with 3 bit/s/Hz, which is the IMT Advanced requirement for 4G. The remarkable Massive MIMO gain was achieved by spatial multiplexing of data signals to 22 users using 256-QAM. The raw spectral efficiency is 176 bit/s/Hz, but 17% was lost for practical reasons. You can read more about this measurement campaign here:

http://www.bristol.ac.uk/news/2016/may/5g-wireless-spectrum-efficiency.html
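The quoted numbers are easy to verify with back-of-the-envelope arithmetic: 22 users times the 8 bits per symbol of 256-QAM gives the raw 176 bit/s/Hz, the measured 145.6 bit/s/Hz corresponds to roughly a 17% loss, and multiplying by the 20 MHz bandwidth gives the cell throughput. A minimal Python check (ignoring all physical-layer details beyond these numbers):

```python
# Back-of-the-envelope check of the Bristol/Lund figures quoted above.
num_users = 22
bits_per_symbol = 8            # 256-QAM carries log2(256) = 8 bits per symbol
bandwidth_hz = 20e6            # 20 MHz
measured_se = 145.6            # bit/s/Hz

raw_se = num_users * bits_per_symbol
loss = 1 - measured_se / raw_se
throughput = measured_se * bandwidth_hz

print(f"Raw spectral efficiency: {raw_se} bit/s/Hz")           # 176 bit/s/Hz
print(f"Loss relative to raw:    {100 * loss:.0f} %")           # about 17 %
print(f"Cell throughput:         {throughput / 1e9:.2f} Gbit/s")
```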

256-QAM is generally not an option in cellular networks, due to the inter-cell interference and unfavorable cell edge conditions. Numerical simulations can, however, predict the practically achievable spectral efficiency. The figure below shows the uplink spectral efficiency for a base station with 200 antennas that serves a varying number of users. Interference from many tiers of neighboring cells is considered. Zero-forcing detection, pilot-based channel estimation, and power control that gives every user 0 dB SNR are assumed. Different curves are shown for different values of τ_c, which is the number of symbols per channel coherence interval. The curves have several peaks, since the results are optimized over different pilot reuse factors.

Uplink spectral efficiency in a cellular network with 200 base station antennas.

From this simulation figure we observe that the spectral efficiency grows linearly with the number of users, for the first 30-40 users. For larger user numbers, the spectral efficiency saturates due to interference and limited channel coherence. The top values of the curves are in the range from 60 to 110 bit/s/Hz, which is a remarkable improvement over the 3 bit/s/Hz of IMT Advanced.
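The exact curves require a full multi-cell simulation, but the qualitative shape (linear growth in the number of users K, followed by a peak determined by the pilot overhead) can be sketched with a standard single-cell lower bound for zero-forcing with perfect CSI: the sum spectral efficiency is at least K(1 - K/τ_c)·log2(1 + (M - K)·SNR). This simplified sketch ignores inter-cell interference and channel estimation errors, so the absolute values and the optimal K come out higher than in the figure; it only illustrates the trade-off between multiplexing gain and pilot overhead.

```python
# Simplified single-cell sketch (not the exact setup behind the figure): zero-forcing
# with perfect CSI, i.i.d. Rayleigh fading, 0 dB SNR per user, and K pilot symbols out
# of a coherence interval of tau_c symbols. The standard lower bound is then
#   sum SE >= K * (1 - K/tau_c) * log2(1 + (M - K) * SNR).
import numpy as np

M = 200      # base station antennas
snr = 1.0    # 0 dB

for tau_c in [200, 400, 1000]:                  # assumed coherence-interval lengths
    K = np.arange(1, M)                         # zero-forcing requires K < M
    overhead = 1 - K / tau_c                    # fraction of symbols left for data
    sum_se = K * overhead * np.log2(1 + (M - K) * snr)
    best = np.argmax(sum_se)
    print(f"tau_c = {tau_c:4d}: peak sum SE = {sum_se[best]:.0f} bit/s/Hz at K = {K[best]}")
```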

In conclusion, 20x-40x improvements in spectral efficiency over IMT Advanced are what to expect from Massive MIMO.