Category Archives: Beyond 5G

Challenges on the Path to Deployment

(Photo: Marina Bay Sands Expo and Convention Centre)

I attended GLOBECOM in Singapore earlier this week. Since more and more preprints are posted online before the conferences take place, the unique value of attending lies in meeting other researchers and taking part in the invited talks and interactive panel discussions. This year I attended the panel “Massive MIMO – Challenges on the Path to Deployment”, which was organized by Ian Wong (National Instruments). The panelists were Amitava Ghosh (Nokia), Erik G. Larsson (Linköping University), Ali Yazdan (Facebook), Raghu Rao (Xilinx), and Shugong Xu (Shanghai University).

No common definition

The first discussion item was the definition of Massive MIMO. While everyone agreed that the main characteristic is that the number of controllable antenna elements is much larger than the number of spatially multiplexed users, the panelists put forward different additional requirements. The industry prefers to call anything with at least 32 antennas Massive MIMO, irrespective of whether the beamforming is constructed from codebook-based feedback, a grid of beams, or by exploiting uplink pilots and TDD reciprocity. This demonstrates that Massive MIMO is becoming a marketing term, rather than a well-defined technology. In contrast, academic researchers often have more restrictive definitions; Larsson suggested specifically including the TDD reciprocity approach in the definition, since it is the robust and overhead-efficient way to acquire channel state information (CSI), particularly for non-line-of-sight users; see Myth 3 in our magazine paper. This narrow definition clearly rules out FDD operation, as pointed out by a member of the audience. Personally, I think that any multi-user MIMO implementation that provides performance similar to the TDD-reciprocity-based approach deserves the Massive MIMO branding, but we should not let marketing people use the name for any implementation just because it has many antennas.

Important use cases

The primary use cases for Massive MIMO in sub-6 GHz bands are to improve coverage and spectral efficiency, according to the panel. Great improvements in spectral efficiency have been demonstrated by prototyping, but the panelists agreed that these figures should be seen as upper bounds. We should not expect more than 4x improvements over LTE in the first deployments, according to Ghosh. Larger gains are expected in later releases, but there will continue to be a substantial gap between the average spectral efficiency observed in real cellular networks and the peak spectral efficiency demonstrated by prototypes. Since Massive MIMO achieves its main spectral efficiency gains by spatially multiplexing users, we might not need a full-blown Massive MIMO implementation today, when there are only one or two simultaneously active users in most cells. However, the networks need to evolve over time as the number of active users per cell grows.

In mmWave bands, the panel agreed that Massive MIMO is mainly for extending coverage. The first large-scale deployments of Massive MIMO will likely aim at delivering fixed wireless broadband access and this must be done in the mmWave bands; there is too little bandwidth in sub-6 GHz bands to deliver data rates that can compete with wired DSL technology.

Initial cost considerations

The deployment cost is a key factor that will limit the first generations of Massive MIMO networks. Although theoretical research has demonstrated that each antenna branch can be built using low-resolution hardware when there are many antennas, one should not forget the higher out-of-band radiation that such hardware can cause. We need to comply with the spectral emission masks; spectrum is incredibly expensive, so a licensee cannot accept interference from adjacent bands. For this reason, several panelists from the industry expressed the view that we need to use similar hardware components in Massive MIMO as in contemporary base stations and that, therefore, the hardware cost grows linearly with the number of antennas. On the other hand, Larsson pointed out that the futuristic devices seen in James Bond movies 10 years ago can now be bought for $100 in any electronics store; hence, as the technology evolves and economies of scale kick in, the cost per antenna should not be higher than in a smartphone.

A related debate is the one between analog and digital beamforming. Several panelists said that analog and hybrid approaches will be used to cut costs in the first deployments. Relying on analog technology may seem odd in an age when everything is becoming digital, but Yazdan pointed out that it is only a temporary solution. The long-term vision is to do fully digital beamforming, even in mmWave bands.

Another implementation challenge that was discussed is the acquisition of CSI for mobile users. This is often brought up as a showstopper because hybrid beamforming methods struggle with it; it is like watching a running person through binoculars and trying to follow the movement. Mobility is a challenging issue for any radio technology, but if you rely on uplink pilots for CSI acquisition, it will not be harder than in today's systems. This has also been demonstrated by measurements.

Open problems

The panel was asked to describe the most important open problems in the Massive MIMO area, from a deployment perspective. One obvious issue, which we called the “grand question” in a previous paper, is to provide better support for Massive MIMO in FDD.

The control plane and MAC layer deserve more attention, according to Larsson. Basic functionalities such as ACK/NACK feedback are often ignored by academia, but they are incredibly important in practice.

The design of “cell-free” densely distributed Massive MIMO systems also deserves further attention. Connecting all existing antennas together to perform joint transmission seems to be the ultimate approach to wireless networks. Although there is no practical implementation yet, Yazdan stressed that deploying such networks might actually be more practical than it seems, given the growing interest in C-RAN technology.

10 years from now

I asked the panel what the status of Massive MIMO will be 10 years from now. Rao predicted that we will have Massive MIMO everywhere, just as all access points support small-scale MIMO today. Yazdan believed that the different radio technologies (e.g., WiFi, LTE, NR) will converge into one interconnected system, which also allows operators to share hardware. Larsson predicted that over the next decade many more people will have understood the fundamental benefits of utilizing TDD and channel reciprocity, which will have a profound impact on regulations and spectrum allocation.

Asymptomania

I am borrowing the title from a column written by my advisor two decades ago, in the array signal processing gold rush era.

Asymptotic analysis is a popular tool within statistical signal processing (infinite SNR or number of samples), information theory (infinitely long blocks) and more recently, [massive] MIMO wireless communications (infinitely many antennas).

Some caution is strongly advisable with respect to the latter. In fact, there are compelling reasons to avoid asymptotics in the number of antennas altogether:

  • First, elegant, rigorous and intuitively comprehensible capacity bound formulas are available in closed form.
    The proofs of these expressions use basic random matrix theory, but no asymptotics at all.
  • Second, the notion of “asymptotic limit” or “asymptotic behavior” helps propagate the myth that Massive MIMO somehow relies on asymptotics or “infinite” numbers (or even exorbitantly large numbers) of antennas.
  • Third, many approximate performance results for Massive MIMO (particularly “deterministic equivalents”) based on asymptotic analysis are complicated, require numerical evaluation, and offer little intuitive insight. (And, the verification of their accuracy is a formidable task.)

Finally, and perhaps most importantly, careless use of asymptotic arguments may yield erroneous conclusions. For example, in the effective SINR expressions for multi-cell Massive MIMO, the coherent interference scales with M (the number of antennas), which has fostered the commonly held misconception that coherent interference is the main impairment caused by pilot contamination. But in fact, in many relevant circumstances it is not (see the case studies here): the main impairment for “reasonable” values of M is the reduction in coherent beamforming gain due to reduced estimation quality, which in turn is independent of M.
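To see the relative size of the two effects, here is a small numerical sketch based on the standard use-and-forget SINR bound for uplink maximum-ratio combining with least-squares channel estimates; the single pilot-sharing interferer and all parameter values below are my own illustrative choices, not taken from any case study.

```python
import numpy as np

# Two impairments caused by pilot contamination, evaluated with the standard
# use-and-forget SINR for uplink maximum-ratio combining and LS channel
# estimates. One pilot-sharing interferer; all numbers are illustrative.
s0, s1 = 0.02, 0.002   # received SNR of the desired user and of the pilot-sharing user
tau_p = 200            # pilot length (processing gain)

den_p = 1 + tau_p * (s0 + s1)
g0 = tau_p * s0**2 / den_p                    # estimate quality, contaminated pilot
g1 = tau_p * s1**2 / den_p                    # source of the coherent interference
g0_clean = tau_p * s0**2 / (1 + tau_p * s0)   # estimate quality, clean pilot
noise_int = 1 + s0 + s1                       # noise plus non-coherent interference

print("        estimation-quality loss   coherent-interference loss")
for M in (32, 64, 128, 256, 512):
    loss_est = 10 * np.log10(g0_clean / g0)                     # independent of M
    loss_coh = 10 * np.log10((noise_int + M * g1) / noise_int)  # grows with M
    print(f"M = {M:3d}:        {loss_est:4.2f} dB                  {loss_coh:4.2f} dB")
```

With these numbers, the coherent-interference loss does not overtake the (M-independent) estimation-quality loss until M grows beyond about 500; a stronger pilot-sharing user would move that crossover to smaller M.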

In addition, the number of antennas beyond which the far-field assumption is violated is actually smaller than one might first think (see Problem 3.14).

Does Reciprocity-based Beamforming Break Down at Low SNR?

I hear this being claimed now and then, and it is – of course – both correct and incorrect, at the same time. For the benefit of our readers I take the opportunity to provide some free consulting on the topic.

The important fact is that ergodic capacity can be lower-bounded by a formula of the form log2(1+SINR), where SINR is an “effective SINR” that includes, among other things, the effects of the terminal’s lack of channel knowledge.

This effective SINR scales proportionally to M (the number of antennas), for fixed total radiated power. Compared to a single-antenna system, reciprocity always offers M times better “beamforming gain” regardless of the system’s operating point. (In fact, one of the paradoxes of Massive MIMO is that performance always increases with M, despite there being “more unknowns to estimate”!) And yes, at very low SNR the effective SINR is proportional to SNR^2, so reciprocity-based beamforming does “break down”; however, it is still M times better than a single-antenna link (with the same total radiated power). One will also, eventually, reach a point where the capacity bound for omnidirectional transmission (e.g., using a space-time code with appropriate dimension reduction in order to host the required downlink pilots) exceeds that of reciprocity-based beamforming; importantly, though, in this regime the bounds may be loose.
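A back-of-the-envelope sketch makes both scalings visible. It assumes a single terminal, maximum-ratio precoding, a least-squares channel estimate from one uplink pilot, and equal normalized power on pilot and data; the function and all numbers below are purely illustrative.

```python
import numpy as np

# Back-of-the-envelope effective SINR for reciprocity-based MR beamforming to a
# single terminal (use-and-forget style bound). beta is the channel gain
# normalized by the noise power; rho is the normalized transmit power, assumed
# equal on the uplink pilot and the downlink data.
def effective_sinr(M, rho, beta=1.0, tau_p=1):
    gamma = tau_p * rho * beta**2 / (1 + tau_p * rho * beta)  # estimate quality
    return M * rho * gamma / (1 + rho * beta)                 # effective SINR

for rho_dB in (-20, -10, 0, 10):
    rho = 10 ** (rho_dB / 10)
    s1, s64, s128 = (effective_sinr(M, rho) for M in (1, 64, 128))
    print(f"SNR = {rho_dB:+3d} dB: M=1 -> {10*np.log10(s1):6.1f} dB, "
          f"M=64 -> {10*np.log10(s64):6.1f} dB, M=128 -> {10*np.log10(s128):6.1f} dB")
```

The M-fold gain (about 18 dB for M = 64 and 21 dB for M = 128) is present at every operating point, while the roughly 2 dB of effective SINR lost per dB of SNR at the low end is the “breakdown” referred to above.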

These matters, along with numerous case studies involving actual link budget calculations, are of course rigorously explained in our recent textbook.

Book Review: The 5G Myth

The 5G Myth is the provocative title of a recent book by William Webb, CEO of the Weightless SIG, a standards body for IoT/M2M technology. In this book, the author tells a compelling story of a stagnating market for cellular communications, where the customers are generally satisfied with the data rates delivered by the 4G networks. The revenue growth of the mobile network operators (MNOs) is relatively low and also in decline, since the current services are so good that the customers are unwilling to pay more for improved service quality. Although many new wireless services have materialized over the past decade (e.g., video streaming, social networks, video calls, mobile payment, and location-based services), the MNOs have failed to take the leading role in any of them. Instead, the customers make use of external services (e.g., YouTube, Facebook, Skype, Apple Pay, and Google Maps) and only pay the MNOs to deliver the data bits.

The author argues that, under these circumstances, the MNOs have little to gain from investing in 5G technology. Most customers are not asking for any of the envisaged 5G services and will not be inclined to pay extra for them. Webb even compares the situation to the prisoner’s dilemma: the MNOs would benefit the most from not investing in 5G, but they will make the investments anyway to avoid a situation where customers switch to a competitor that has invested in 5G. The picture that Webb paints of 5G is rather pessimistic compared to a recent McKinsey report, where more cost-efficient network operation is described as a key reason for MNOs to invest in 5G.

The author provides a refreshing description of the market for cellular communications, which is important in a time when the research community focuses more on broad 5G visions than on the customers’ actual needs. The book is thus a recommended read for 5G researchers, since we should all ask ourselves if we are developing a technology that tackles the right unsolved problems.

Webb criticizes not only the economic incentives for 5G deployment, but also the 5G visions and technologies in general. The claims are in many cases reasonable; for example, Webb accurately points out that most of the 5G performance goals are overly optimistic and probably only required by a tiny fraction of the user base. He also notes that some “5G applications” already have a wireless solution (e.g., indoor IoT devices connected over WiFi) or should preferably be wired (e.g., ultra-reliable low-latency applications such as remote surgery).

However, it is also in this part of the book that the argumentation sometimes falls short. For example, Webb extrapolates a recent drop in traffic growth to claim that the global traffic volume will reach a plateau in 2027. It is plausible that the traffic growth rate will decline as a larger and larger fraction of the global population gains access to wireless high-speed connections. But one should bear in mind that we have witnessed exponential growth in wireless communication traffic for the past century (known as Cooper’s law), so this trend can just as well continue for a few more decades, potentially at a lower growth rate than in the past decade.

Webb also provides a misleading description of multiuser MIMO by claiming that 1) the antenna arrays would be unreasonably large at cellular frequencies and 2) the beamforming requires complicated angular beam-steering. These are two of the myths that we dispelled in the paper “Massive MIMO: Ten myths and one grand question” last year. In fact, testbeds have demonstrated that massive multiuser MIMO is feasible in lower frequency bands, and particularly useful for improving the spectral efficiency through coherent beamforming and spatial multiplexing of users. Reciprocity-based beamforming is a solution for mobile and cell-edge users, for which angular beam-steering is indeed inefficient.

The book is not as pessimistic about the future as it might seem from this review. Webb provides an alternative vision for future wireless communications, where consistent connectivity rather than higher peak rates is the main focus. This coincides with one of the 5G performance goals (i.e., 50 Mbit/s everywhere), but Webb advocates an extensive government-supported deployment of WiFi instead of 5G technology. The use of WiFi is not a bad idea; I personally consume relatively little cellular data since WiFi is available at home, at work, and at many public locations in Sweden. However, cellular services are necessary to realize the dream of consistent connectivity, particularly outdoors and when in motion. This is where a 5G cellular technology that delivers better coverage and higher data rates at the cell edge is highly desirable. Reciprocity-based Massive MIMO seems to be the solution that can deliver this; thus, Webb would have had a stronger case if this technology had been properly integrated into his vision.

In summary, the combination of 5G Massive MIMO for wide-area coverage and WiFi for local-area coverage might be the way to truly deliver consistent connectivity.

Real-Time Massive MIMO DSP at 50 milliWatt

Colleagues at Lund University last month presented a working circuit that performs, in real time, zero-forcing decoding and precoding for 8 simultaneous terminals and 128 base station antennas, over a 20 MHz bandwidth and at a power consumption of about 50 milliwatts.

Impressive, and important.

Granted, this number does not include the complexity of FFTs, sampling rate conversions, and several other (far from insignificant) tasks; however, it does include the bulk of the Massive-MIMO-specific digital processing. The design exploits a number of tricks and Massive-MIMO-specific properties, in particular the diagonal dominance of the channel Gramian in sufficiently favorable propagation.
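This is not the Lund design itself, but a few lines of NumPy illustrate the property being exploited: for an i.i.d. Rayleigh channel with M much larger than K, the Gramian is nearly diagonal, so exact or approximate zero-forcing is cheap. The matrix sizes match the chip (128 antennas, 8 terminals); everything else is an illustrative assumption.

```python
import numpy as np

# Illustration (not the Lund design): for an i.i.d. Rayleigh channel with
# M = 128 antennas and K = 8 terminals, the Gramian G = H^H H is close to
# diagonal, which is what makes low-complexity (near-)zero-forcing feasible.
rng = np.random.default_rng(0)
M, K = 128, 8
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

G = H.conj().T @ H                      # K x K Gramian
off_diag = G - np.diag(np.diag(G))
dominance = np.abs(off_diag).max() / np.abs(np.diag(G)).min()
print(f"largest off-diagonal / smallest diagonal: {dominance:.2f}")  # typically ~0.1

W_zf = H @ np.linalg.inv(G)             # zero-forcing filter (up to scaling)
print("residual inter-user leakage:", np.abs(H.conj().T @ W_zf - np.eye(K)).max())
```

Because the Gramian is close to diagonal, its inverse can be approximated by a few terms of a Neumann series or by other low-complexity schemes, which is one family of tricks discussed in this context.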

When I started working on Massive MIMO in 2009, the commonly held view was that the technology would be infeasible because of its computational complexity. In particular, the sheer idea of performing zero-forcing processing in real time was met with, if not ridicule, extreme skepticism. We quickly realized, however, that a reasonable DSP implementation would require no more than some ten watts. While that is a small number in itself, it turned out to be an overestimate by orders of magnitude!

I spoke with some of the lead inventors of the chip to learn more about its design. First, the architectures for decoding and for precoding differ a bit. While there is no fundamental reason why this has to be so, one motivation is the possible use of nonlinear detectors on the uplink. (The need for such detectors, for most “typical” cellular Massive MIMO deployments, is not clear – but that is another story.)

Second, and more importantly, the scalability of the design is not clear. While the complexity of the matrix operations themselves already scales fast with the dimension, the precision of the arithmetic may have to be increased as well, resulting in a much-faster-than-cubic overall complexity scaling. Since Massive MIMO operates at its best when multiplexing many tens of terminals (or even thousands, in some applications), significant challenges remain for the future. That is good news for circuit engineers, algorithm designers, and communications theoreticians alike. The next ten years will be exciting.
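As a rough indication of how the arithmetic alone grows, here is a back-of-the-envelope operation count for one zero-forcing computation per coherence block, at fixed wordlength; the counting convention and the fixed antenna-to-terminal ratio are my own assumptions.

```python
# Rough count of complex multiply-accumulates for one zero-forcing computation
# per coherence block: Gramian (M*K^2), K x K inversion (~K^3/3), and forming
# the filter (M*K^2). Back-of-the-envelope accounting, fixed wordlength.
def zf_macs(M, K):
    return M * K**2 + K**3 // 3 + M * K**2

for K in (8, 32, 128):
    M = 16 * K                       # keep the antenna-to-terminal ratio fixed
    print(f"K = {K:4d}, M = {M:5d}: ~{zf_macs(M, K):.2e} MACs per coherence block")
```

On top of this roughly cubic growth in operation count comes the wordlength increase mentioned above.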

How Much Performance is Lost by FDD Operation?

There has been a long-standing debate on the relative performance between reciprocity-based (TDD) Massive MIMO and that of FDD solutions based on grid-of-beams, or hybrid beamforming architectures. The matter was, for example, the subject of a heated debate in the 2015 Globecom industry panel “Massive MIMO vs FD-MIMO: Defining the next generation of MIMO in 5G” where on the one hand, the commercial arguments for grid-of-beams solutions were clear, but on the other hand, their real potential for high-performance spatial multiplexing was strongly contested.

While it is known that grid-of-beams solutions perform poorly in isotropic scattering, there have been no prior experimental results. This new paper:

Massive MIMO Performance—TDD Versus FDD: What Do Measurements Say?

answers this performance question through the analysis of real Massive MIMO channel measurement data obtained in the 2.6 GHz band. Except in certain line-of-sight (LOS) environments, the original reciprocity-based TDD Massive MIMO represents the only effective implementation of Massive MIMO in the frequency bands under consideration.

Relative Value of Spectrum

What is worth more: 1 MHz of bandwidth at a 100 MHz carrier frequency, or 10 MHz of bandwidth at a 1 GHz carrier? Conventional wisdom has it that higher carrier frequencies are more valuable because “there is more bandwidth there”. In this post, I will explain why that is not entirely correct.

The basic presumption of TDD/reciprocity-based Massive MIMO is that all activity, comprising the transmission of uplink pilots, uplink data and downlink data, takes place within a coherence interval.

For fixed mobility, in meters per second, the dimensionality of the coherence interval is proportional to the wavelength, because the Doppler spread is proportional to the carrier frequency.
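A quick numerical illustration of this proportionality, using a coherence-time rule of thumb of a quarter wavelength of motion and a coherence bandwidth of 200 kHz; both constants, and the 30 m/s terminal speed, are illustrative assumptions rather than values from the post:

```python
# Rough illustration: samples per coherence interval (B_c * T_c) at two carriers.
# Assumptions (illustrative): T_c ~ wavelength/(4*v), B_c = 200 kHz.
c = 3e8          # speed of light [m/s]
v = 30.0         # terminal speed [m/s]
B_c = 200e3      # coherence bandwidth [Hz], assumed independent of carrier

for f_c in (100e6, 1e9):             # carrier frequencies [Hz]
    wavelength = c / f_c             # [m]
    T_c = wavelength / (4 * v)       # coherence time [s], rule of thumb
    print(f"f_c = {f_c/1e6:6.0f} MHz: T_c = {T_c*1e3:5.1f} ms, "
          f"B_c*T_c = {B_c*T_c:7.0f} samples")
```

The coherence interval at the 100 MHz carrier holds ten times more samples, which is the effect the rest of the argument builds on.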

In a single cell, with max-min fairness power control (for uniform quality-of-service provision), the sum-throughput of Massive MIMO can be computed analytically in closed form.
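Taking maximum-ratio precoding with least-squares channel estimation as a representative case (the precise SINR expression inside the logarithm depends on the processing scheme, but its structure does not), the bound takes the form

$$\frac{B}{2}\,K\left(1-\frac{K}{B_c T_c}\right)\log_2\!\left(1+\frac{M\,\mathrm{SNR}}{\sum_{k=1}^{K}\frac{1+\mathrm{SNR}\,\beta_k}{\gamma_k}}\right).$$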

In this formula,

  • $B$ = bandwidth in Hertz (split equally between uplink and downlink)
  • $M$ = number of base station antennas
  • $K$ = number of multiplexed terminals
  • $B_c$ = coherence bandwidth in Hertz (independent of carrier frequency)
  • $T_c$ = coherence time in seconds (inversely proportional to carrier frequency)
  • SNR = signal-to-noise ratio (“normalized transmit power”)
  • $\beta_k$ = path loss for the k-th terminal
  • $\gamma_k$ = constant, close to $\beta_k$ with sufficient pilot power

This formula assumes independent Rayleigh fading, but the general conclusions remain under other models.

The factor that pre-multiplies the logarithm depends on $K$ and is maximized when $K=B_c T_c/2$; the maximal value is $B B_c T_c/8$, which is proportional to $T_c$ and therefore to the wavelength. Since this maximal pre-log factor is proportional to the product $B T_c$, one can obtain the same pre-log factor with a smaller bandwidth by instead increasing the wavelength, i.e., reducing the carrier frequency. At the same time, assuming an appropriate scaling of the number of antennas, $M$, with the number of terminals, $K$, the quantity inside of the logarithm is a constant.

In conclusion, the sum spectral efficiency (in b/s/Hz) can easily double for every doubling of the wavelength: a megahertz of bandwidth at a 100 MHz carrier is worth ten times more than a megahertz of bandwidth at a 1 GHz carrier. So while there is more bandwidth available at higher carriers, the potential multiplexing gains are correspondingly smaller.
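To make the comparison concrete, here is a quick numerical sketch of the maximal pre-log factor for three such setups, using the same illustrative coherence-interval assumptions as above (the third setup is added by me for symmetry):

```python
# Maximal pre-log factor (B/2)*K*(1 - K/(B_c*T_c)), attained at K = B_c*T_c/2,
# for three setups that scale bandwidth and carrier frequency together.
# Illustrative assumptions: T_c ~ wavelength/(4*v), B_c = 200 kHz, v = 30 m/s.
c, v, B_c = 3e8, 30.0, 200e3

setups = [(1e6, 100e6),     # 1 MHz of bandwidth at a 100 MHz carrier
          (10e6, 1e9),      # 10 MHz at 1 GHz
          (100e6, 10e9)]    # 100 MHz at 10 GHz

for B, f_c in setups:
    T_c = (c / f_c) / (4 * v)           # coherence time, rule of thumb
    K_opt = B_c * T_c / 2               # number of terminals maximizing the pre-log
    prelog = B * B_c * T_c / 8          # maximal pre-log factor [Hz]
    print(f"{B/1e6:5.0f} MHz at {f_c/1e9:5.2f} GHz: K = {K_opt:6.0f}, "
          f"sum pre-log = {prelog/1e6:5.0f} MHz, per terminal = {prelog/K_opt/1e3:7.1f} kHz")
```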

In this example, all three setups give the same sum-throughput; however, the throughput per terminal is vastly different.