Cellular Multi-User MIMO: A Technology Whose Time Has Come

Both the number of wirelessly connected devices and the traffic that they generate have grown steadily since the early days of cellular communications. This calls for continuous improvements in the area capacity [bit/s/km²] of the networks. The use of adaptive antenna arrays was identified as a potential capacity-improving technology in the mid-eighties. An early uplink paper was “Optimum combining for indoor radio systems with multiple users” from 1987 by J. Winters at Bell Labs. An early downlink paper was “The performance enhancement of multibeam adaptive base-station antennas for cellular land mobile radio systems” by S. C. Swales et al. from 1990.

The multi-user MIMO concept, then called space-division multiple access (SDMA), was picked up by the industry in the nineties. For example, Ericsson ran field trials with antenna arrays in GSM systems, reported in “Adaptive antennas for GSM and TDMA systems” from 1999. ArrayComm filed an SDMA patent in 1991 and carried out trials during the nineties. In cooperation with the manufacturer Kyocera, this work resulted in commercial deployments of SDMA as an overlay to the TDD-based Personal Handy-phone System (PHS).

Trial with a 12-element circular array by ArrayComm in the late nineties.

Given this history, why isn’t multi-user MIMO a key ingredient in current cellular networks? I think there are several answers to this question:

  1. Most cellular networks use FDD spectrum. To acquire the downlink channels, SDMA research first focused on angle-of-arrival estimation and later on beamforming codebooks. Cellular propagation environments turned out to be far more complicated than such system concepts can easily handle.
  2. The breakthroughs in information theory for multi-user MIMO happened in the early 2000s, so in the nineties the industry had no theoretical framework to evaluate and optimize its multiple-antenna concepts.
  3. In practice, it has been far easier to increase the area capacity by deploying more base stations and using more spectrum than by developing more advanced base station hardware. In current networks, there are typically zero, one, or two users active per cell at any given time, and then there is little need for multi-user MIMO.

Why is multi-user MIMO considered a key 5G technology? Basically because the three issues described above have now changed substantially. There is a renewed interest in TDD, with successful cellular deployments in Asia and WiFi being used everywhere. Massive MIMO is a refined form of multi-user MIMO, in which TDD operation enables reciprocity-based channel estimation in any propagation environment, the large number of antennas allows for low-complexity signal processing, and the scalable protocols are suitable for large-scale deployments. The technology can nowadays be implemented using power-efficient off-the-shelf radio-frequency transceivers, as demonstrated by testbeds. Massive MIMO builds on a solid foundation of information theory, which shows how to communicate efficiently under practical impairments such as interference and imperfect channel knowledge.
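
To make the reciprocity argument concrete, here is a minimal numerical sketch (my own toy example, not from the post): a base station with many antennas estimates the uplink channels from pilots and, thanks to TDD reciprocity, reuses the estimates for simple maximum-ratio downlink precoding. All parameter values are illustrative assumptions.

```python
# Toy sketch of reciprocity-based Massive MIMO (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 8                 # base-station antennas, single-antenna users (assumed)
pilot_snr = 10 ** (10 / 10)  # 10 dB pilot SNR (assumed)

# Uplink Rayleigh-fading channels: one M-dimensional column per user.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Uplink pilot phase: with orthogonal pilots, each channel is estimated
# directly from K pilot symbols, regardless of how rich the propagation is.
N = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
H_hat = H + N / np.sqrt(pilot_snr)

# TDD reciprocity: the downlink channel is the transpose of the uplink one,
# so the same estimate feeds a low-complexity maximum-ratio precoder.
W = H_hat.conj() / np.linalg.norm(H_hat, axis=0)

# Effective downlink gains: diagonal entries are desired signals,
# off-diagonal entries are inter-user interference.
G = H.T @ W
signal = np.abs(np.diag(G)) ** 2
interference = np.sum(np.abs(G) ** 2, axis=1) - signal
print("Per-user signal-to-interference ratio:", np.round(signal / interference, 1))
```

With M = 64 antennas serving K = 8 users, the array gain makes the desired signal dominate the inter-user interference even with this simplest possible precoder, which is the essence of the low-complexity claim above.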

Perhaps most importantly, spatial multiplexing is needed to manage the future growth in data traffic. This is because deploying many more base stations or obtaining much more spectrum are not viable options if we want to maintain network coverage: small cells at street level are easily shadowed by buildings, and mm-wave signals do not propagate well through walls. In 5G networks, a typical cellular base station might have tens of active users at a time, which is a sufficient number to benefit from the high spectral efficiency offered by Massive MIMO.
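
As a rough back-of-the-envelope calculation (the numbers below are my own illustrative assumptions, not figures from the post), spatial multiplexing lets the sum spectral efficiency of a cell grow roughly linearly with the number of simultaneously served users:

```python
# Back-of-the-envelope sum spectral efficiency of spatial multiplexing
# (all values are illustrative assumptions).
import math

bandwidth_hz = 20e6   # assumed 20 MHz carrier
K = 20                # tens of users multiplexed in the same time-frequency resource
sinr_db = 10          # assumed per-user SINR after precoding
sinr = 10 ** (sinr_db / 10)

se_per_user = math.log2(1 + sinr)   # bit/s/Hz per user
sum_se = K * se_per_user            # bit/s/Hz per cell
print(f"Per-user spectral efficiency: {se_per_user:.2f} bit/s/Hz")
print(f"Sum spectral efficiency:      {sum_se:.1f} bit/s/Hz")
print(f"Cell throughput:              {bandwidth_hz * sum_se / 1e6:.0f} Mbit/s")
```

Reaching the same cell throughput without spatial multiplexing would require roughly K times more spectrum or K times more base stations, which is exactly the trade-off the paragraph above points to.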
