Category Archives: Beyond 5G

Does Reciprocity-based Beamforming Break Down at Low SNR?

I hear this being claimed now and then, and it is – of course – both correct and incorrect, at the same time. For the benefit of our readers I take the opportunity to provide some free consulting on the topic.

The important fact is that the ergodic capacity can be lower-bounded by a formula of the form $\log_2(1+\mathrm{SINR})$, where SINR is an "effective SINR" that includes, among other things, the effect of the terminal's lack of channel knowledge.

This effective SINR scales proportionally to $M$ (the number of antennas), for fixed total radiated power. Compared to a single-antenna system, reciprocity always offers an $M$-fold "beamforming gain", regardless of the system's operating point. (In fact, one of the paradoxes of Massive MIMO is that performance always increases with $M$, despite there being "more unknowns to estimate"!) And yes, at very low SNR the effective SINR is proportional to $\mathrm{SNR}^2$, so reciprocity-based beamforming does "break down" in that sense; however, it remains $M$ times better than a single-antenna link with the same total radiated power. One will also, eventually, reach a point where the capacity bound for omnidirectional transmission (e.g., using a space-time code with appropriate dimension reduction to host the required downlink pilots) exceeds that of reciprocity-based beamforming. Importantly, though, in this regime the bounds may be loose.
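To make the $M$-fold gain and the low-SNR $\mathrm{SNR}^2$ behavior concrete, here is a minimal numerical sketch of a standard capacity lower bound for single-user uplink maximum-ratio combining with MMSE channel estimation. This is my own illustration, not a formula from the textbook; the pilot length tau_p and the path loss beta are assumed values.

```python
# Effective SINR of a standard ergodic-capacity lower bound for uplink
# maximum-ratio combining with MMSE channel estimation (single user,
# i.i.d. Rayleigh fading). tau_p and beta are illustrative assumptions.
def effective_sinr(M, snr, tau_p=10, beta=1.0):
    gamma = tau_p * snr * beta**2 / (1 + tau_p * snr * beta)  # estimate quality
    return M * snr * gamma / (1 + snr * beta)

for snr_db in (-20, -10, 0, 10):
    snr = 10 ** (snr_db / 10)
    s1, s100 = effective_sinr(1, snr), effective_sinr(100, snr)
    print(f"SNR {snr_db:+3d} dB: M=1 -> {s1:.2e}, M=100 -> {s100:.2e}, "
          f"ratio = {s100 / s1:.0f}")
```

At low SNR the printed values fall off roughly as $\mathrm{SNR}^2$, but the ratio column stays pinned at $M=100$: the beamforming gain survives even in the regime where the bound itself "breaks down".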

These matters, along with numerous case studies involving actual link budget calculations, are of course rigorously explained in our recent textbook.

Book Review: The 5G Myth

The 5G Myth is the provocative title of a recent book by William Webb, CEO of the Weightless SIG, a standards body for IoT/M2M technology. In this book, the author tells a compelling story of a stagnating market for cellular communications, where customers are generally satisfied with the data rates delivered by 4G networks. The revenue growth of the mobile network operators (MNOs) is relatively low and in decline, since the current services are so good that customers are unwilling to pay more for improved service quality. Although many new wireless services have materialized over the past decade (e.g., video streaming, social networks, video calls, mobile payment, and location-based services), the MNOs have failed to take the leading role in any of them. Instead, customers use external services (e.g., YouTube, Facebook, Skype, Apple Pay, and Google Maps) and only pay the MNOs to deliver the data bits.

The author argues that, under these circumstances, the MNOs have little to gain from investing in 5G technology. Most customers are not asking for any of the envisaged 5G services and will not be inclined to pay extra for them. Webb even compares the situation to the prisoner's dilemma: the MNOs would benefit the most from not investing in 5G, but they will invest anyway to avoid a situation where customers switch to a competitor that has invested in 5G. The picture that Webb paints of 5G is rather pessimistic compared to a recent McKinsey report, where more cost-efficient network operation is described as a key reason for MNOs to invest in 5G.

The author provides a refreshing description of the market for cellular communications, which is important in a time when the research community focuses more on broad 5G visions than on the customers’ actual needs. The book is thus a recommended read for 5G researchers, since we should all ask ourselves if we are developing a technology that tackles the right unsolved problems.

Webb criticizes not only the economic incentives for 5G deployment but also the 5G visions and technologies in general. The claims are in many cases reasonable; for example, Webb accurately points out that most of the 5G performance goals are overly optimistic and probably required only by a tiny fraction of the user base. He also notes, correctly, that some "5G applications" already have a wireless solution (e.g., indoor IoT devices connected over WiFi) or should preferably be wired (e.g., ultra-reliable low-latency applications such as remote surgery).

However, it is also in this part of the book that the argumentation sometimes falls short. For example, Webb extrapolates a recent drop in traffic growth to claim that the global traffic volume will reach a plateau in 2027. It is plausible that the traffic growth rate will decline as a larger and larger fraction of the global population gains access to wireless high-speed connections. But one should bear in mind that we have witnessed exponential growth in wireless communication traffic for the past century (known as Cooper's law), so this trend may well continue for a few more decades, potentially at a lower growth rate than in the past decade.

Webb also provides a misleading description of multiuser MIMO by claiming that 1) the antenna arrays would be unreasonably large at cellular frequencies and 2) the beamforming requires complicated angular beam-steering. These are two of the myths that we dispelled in the paper "Massive MIMO: Ten myths and one grand question" last year. In fact, testbeds have demonstrated that massive multiuser MIMO is feasible in the lower frequency bands, and that it is particularly useful for improving the spectral efficiency through coherent beamforming and spatial multiplexing of users. Reciprocity-based beamforming is a solution for mobile and cell-edge users, for whom angular beam-steering indeed is inefficient.

The book is not as pessimistic about the future as this review might suggest. Webb provides an alternative vision for future wireless communications, in which consistent connectivity, rather than higher peak rates, is the main focus. This coincides with one of the 5G performance goals (i.e., 50 Mbit/s everywhere), but Webb advocates an extensive government-supported deployment of WiFi instead of 5G technology. The use of WiFi is not a bad idea; I personally consume relatively little cellular data since WiFi is available at home, at work, and in many public locations in Sweden. However, cellular services are necessary to realize the dream of consistent connectivity, particularly outdoors and when in motion. This is where a 5G cellular technology that delivers better coverage and higher data rates at the cell edge is highly desirable. Reciprocity-based Massive MIMO seems to be the solution that can deliver this, so Webb would have had a stronger case if this technology had been properly integrated into his vision.

In summary, the combination of 5G Massive MIMO for wide-area coverage and WiFi for local-area coverage might be the way to truly deliver consistent connectivity.

Real-Time Massive MIMO DSP at 50 milliWatt

Last month, colleagues at Lund University presented a working circuit that performs, in real time, zero-forcing decoding and precoding for 8 simultaneous terminals with 128 base station antennas, over a 20 MHz bandwidth and at a power consumption of about 50 milliwatt.

Impressive, and important.

Granted, this number does not include the complexity of FFTs, sampling-rate conversions, and several other (non-negligible) tasks; however, it does include the bulk of the Massive MIMO-specific digital processing. The design exploits a number of tricks and Massive MIMO-specific properties, in particular the diagonal dominance of the channel Gramian under sufficiently favorable propagation.
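The post above does not detail the chip's internal architecture, but one well-known trick from the Massive MIMO detection literature illustrates how diagonal dominance can be exploited: approximating the zero-forcing inverse by a truncated Neumann series around the diagonal of the Gramian. The sketch below is an illustration of that idea, not a description of the Lund design; the dimensions match the chip, everything else is assumed.

```python
import numpy as np

M, K = 128, 8  # base station antennas and terminals, as in the Lund design
rng = np.random.default_rng(0)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / 2**0.5

G = H.conj().T @ H                      # K x K Gramian, close to M*I when M >> K
D_inv = np.diag(1.0 / np.diag(G).real)  # cheap: invert the diagonal only
E = D_inv @ (G - np.diag(np.diag(G)))   # normalized off-diagonal residual

# Truncated Neumann series: G^{-1} = (I + E)^{-1} D^{-1} ~ (I - E + E^2) D^{-1},
# which converges quickly when the Gramian is diagonally dominant
G_inv = np.zeros((K, K), dtype=complex)
P = np.eye(K, dtype=complex)
for _ in range(3):
    G_inv += P @ D_inv
    P = -E @ P

exact = np.linalg.inv(G)
err = np.linalg.norm(G_inv - exact) / np.linalg.norm(exact)
print(f"relative error of the 3-term Neumann approximation: {err:.1e}")
```

The appeal for hardware is that the series can be applied directly to received vectors, replacing an explicit matrix inversion with a few well-structured multiplications.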

When I started working on Massive MIMO in 2009, the commonly held view was that the technology would be infeasible because of its computational complexity. In particular, the very idea of performing zero-forcing processing in real time was met with, if not ridicule, extreme skepticism. We quickly realized, however, that a reasonable DSP implementation would require no more than some ten watts. While that is a small number in itself, it turned out to be an overestimate by orders of magnitude!

I spoke with some of the lead inventors of the chip to learn more about its design. First, the architectures for decoding and for precoding differ somewhat. While there is no fundamental reason why this must be so, one motivation is the possible use of nonlinear detectors on the uplink. (The need for such detectors in most "typical" cellular Massive MIMO deployments is not clear – but that is another story.)

Second, and more importantly, the scalability of the design is not clear. While the complexity of the matrix operations themselves scales quickly with the dimension, the arithmetic precision may have to increase as well, resulting in an overall complexity that scales much faster than cubically. Since Massive MIMO operates at its best when multiplexing many tens of terminals (or even thousands, in some applications), significant challenges remain for the future. That is good news for circuit engineers, algorithm designers, and communications theoreticians alike. The next ten years will be exciting.

How Much Performance is Lost by FDD Operation?

There has been a long-standing debate on the relative performance between reciprocity-based (TDD) Massive MIMO and that of FDD solutions based on grid-of-beams, or hybrid beamforming architectures. The matter was, for example, the subject of a heated debate in the 2015 Globecom industry panel “Massive MIMO vs FD-MIMO: Defining the next generation of MIMO in 5G” where on the one hand, the commercial arguments for grid-of-beams solutions were clear, but on the other hand, their real potential for high-performance spatial multiplexing was strongly contested.

While it is known that grid-of-beams solutions perform poorly in isotropic scattering, there have been no prior experimental results on the matter. This new paper:

Massive MIMO Performance—TDD Versus FDD: What Do Measurements Say?

answers this performance question through the analysis of real Massive MIMO channel measurement data obtained in the 2.6 GHz band. Except in certain line-of-sight (LOS) environments, the original reciprocity-based TDD Massive MIMO represents the only effective implementation of Massive MIMO in the frequency bands under consideration.

Relative Value of Spectrum

What is worth more: 1 MHz of bandwidth at a 100 MHz carrier frequency, or 10 MHz of bandwidth at a 1 GHz carrier? Conventional wisdom has it that higher carrier frequencies are more valuable because "there is more bandwidth there". In this post, I will explain why that is not entirely correct.

The basic presumption of TDD/reciprocity-based Massive MIMO is that all activity, comprising the transmission of uplink pilots, uplink data, and downlink data, takes place inside a coherence interval.

At fixed mobility, in meters per second, the dimensionality (in samples) of the coherence interval is proportional to the wavelength, because the Doppler spread is proportional to the carrier frequency.

In a single cell, with max-min fairness power control (for uniform quality-of-service provision), the sum-throughput of Massive MIMO can be computed analytically and is given by the following formula:

$$\frac{B}{2}\, K \left(1-\frac{K}{B_c T_c}\right) \log_2\!\left(1+\frac{M\,\mathrm{SNR}}{\sum_{k=1}^{K}\dfrac{1+\mathrm{SNR}\,\beta_k}{\gamma_k}}\right)$$

In this formula,

  • $B$ = bandwidth in Hertz (split equally between uplink and downlink)
  • $M$ = number of base station antennas
  • $K$ = number of multiplexed terminals
  • $B_c$ = coherence bandwidth in Hertz (independent of carrier frequency)
  • $T_c$ = coherence time in seconds (inversely proportional to carrier frequency)
  • SNR = signal-to-noise ratio (“normalized transmit power”)
  • $\beta_k$ = path loss for the $k$th terminal
  • $\gamma_k$ = constant, close to $\beta_k$ with sufficient pilot power

This formula assumes independent Rayleigh fading, but the general conclusions hold under other models as well.

The factor that pre-multiplies the logarithm depends on $K$. It is maximized when $K=B_c T_c/2$, which follows by setting the derivative of $K(1-K/(B_c T_c))$ with respect to $K$ to zero. The maximal value is $B B_c T_c/8$, which is proportional to $T_c$, and therefore proportional to the wavelength. Owing to the product $B T_c$, one can obtain the same pre-log factor with a smaller bandwidth by increasing the wavelength, i.e., by reducing the carrier frequency. At the same time, if the number of antennas, $M$, is scaled appropriately with the number of terminals, $K$, the quantity inside the logarithm stays constant.

In conclusion, the sum spectral efficiency (in b/s/Hz) can easily double for every doubling of the wavelength: a megahertz of bandwidth at a 100 MHz carrier is worth ten times more than a megahertz of bandwidth at a 1 GHz carrier. So while more bandwidth is available at higher carrier frequencies, the potential multiplexing gains are correspondingly smaller.
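To see this trade-off in numbers, the sketch below evaluates the formula above for three setups in which both the carrier frequency and the bandwidth grow by factors of ten. All parameter values ($B_c$, the coherence-time anchor, the SNR, and equal path losses $\beta_k=\gamma_k$) are assumptions chosen purely for illustration.

```python
from math import log2

B_c = 200e3           # coherence bandwidth [Hz] (assumed, carrier-independent)
snr, beta = 1.0, 1.0  # normalized transmit power and path loss (illustrative)
ants_per_term = 2     # fix M/K so the argument of the logarithm is constant

# (carrier f_c [Hz], bandwidth B [Hz]); T_c scales inversely with f_c,
# anchored at an assumed coherence time of 5 ms at 1 GHz
for f_c, B in [(100e6, 1e6), (1e9, 10e6), (10e9, 100e6)]:
    T_c = 5e-3 * (1e9 / f_c)
    K = int(B_c * T_c / 2)          # pre-log-maximizing number of terminals
    M = ants_per_term * K
    sinr = (M / K) * snr * beta / (1 + snr * beta)  # equal-beta max-min SINR
    rate = (B / 2) * K * (1 - K / (B_c * T_c)) * log2(1 + sinr)
    print(f"f_c = {f_c/1e6:6.0f} MHz, B = {B/1e6:5.1f} MHz: K = {K:5d}, "
          f"sum rate = {rate/1e9:.2f} Gbit/s, "
          f"per terminal = {rate/K/1e6:.2f} Mbit/s")
```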

In this example, all three setups give the same sum-throughput; the throughput per terminal, however, is vastly different.

Extreme Massive MIMO

Suppose extra antennas and RF chains came at no material cost. How large an array could then be useful, and would power consumption eventually render "extreme Massive MIMO" infeasible?

I have argued before that in a mobile access environment, no more than a few hundred antennas per base station will be useful. In an environment without significant mobility, however, the answer is different. In [1, Sec. 6.1], a case study establishes the feasibility of providing fixed wireless broadband service to 3000 homes, using a single isolated base station with 3200 antennas (zero-forcing processing and max-min power control). The power consumption of the associated digital signal processing is estimated in [1, homework #6.6] at less than 500 W. Serving this many terminals is enabled by the long channel coherence (50 ms in the example).

Is this as massive as MIMO could ever get? Perhaps not. Conceivably, there will be environments with even longer channel coherence. Consider, for example, an outdoor city square with no cars or other traffic – hence no significant mobility. Eventually only measurements can determine the channel coherence, but assuming, for the sake of argument, a coherence of 200 ms by 400 kHz gives room for training 40,000 terminals (assuming no more than 50% of resources are spent on training). Multiplexing these terminals would require at least 40,000 antennas, which, at 3 GHz and half-wavelength spacing, would occupy an area of 10 x 10 meters (say, as a rectangular array) – easily integrated onto the face of a skyscraper.

  • What gross rate would the base station offer? Assuming, conservatively, a spectral efficiency of 1 bit/s/Hz (with the usual uniform-service-for-all design), the gross rate in a 25 MHz bandwidth would amount to 1 Tbit/s (see the sketch after this list).
  • How much power would the digital processing require? A back-of-the-envelope calculation along the lines of the homework cited above suggests some 15 kW – the equivalent of a few domestic space heaters (I will return to the "energy efficiency" hype later on this blog).
  • How much transmit power is required? The exact value will depend on the coverage area, but to appreciate the order of magnitude, observe that when doubling the number of antennas, the array gain is doubled. If, simultaneously, the number of terminals is doubled, then the total radiated power will be independent of the array size. Hence, transmit power is small compared to the power required for processing.
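The arithmetic behind these bullet points fits in a few lines; this back-of-the-envelope sketch simply recomputes the numbers quoted in the text.

```python
# Back-of-the-envelope numbers for the "extreme Massive MIMO" example,
# using the assumed coherence figures from the text.
coh_time, coh_bw = 200e-3, 400e3   # coherence time [s] and bandwidth [Hz]
samples = coh_time * coh_bw        # samples per coherence interval
K = int(samples * 0.5)             # terminals, <= 50% of resources on pilots
wavelength = 3e8 / 3e9             # 3 GHz carrier -> 0.1 m
side = K**0.5 * wavelength / 2     # square array at half-wavelength spacing
gross = K * 1.0 * 25e6             # 1 bit/s/Hz per terminal in 25 MHz

print(f"{samples:.0f} samples per coherence interval -> {K} terminals")
print(f"array size: {side:.0f} m x {side:.0f} m")
print(f"gross rate: {gross/1e12:.0f} Tbit/s")
```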

Is this science fiction, or will we see this application in the future? It is fully feasible with today's circuit technology and violates no known physical or information-theoretic constraints. Machine-to-machine, IoT, or perhaps virtual-reality applications may eventually create the desire, or the need, to build extreme Massive MIMO.

[1] T. Marzetta, E. G. Larsson, H. Yang, H. Q. Ngo, Fundamentals of Massive MIMO, Cambridge University Press, 2016.


Cell-Free Massive MIMO: New Concept

Conventional mobile networks (a.k.a. cellular wireless networks) are based on cellular topologies, in which a land area is divided into cells and each cell is served by one base station. An interesting question is: should future mobile networks continue to have cells? My quick answer is no; cell-free networks should be the way to go in the future!

Future wireless networks will have to simultaneously manage billions of devices, each needing high throughput to support applications such as voice, real-time video, and high-quality movies. Cellular networks cannot handle such a huge number of connections, since user terminals at the cell boundary suffer from very high interference and hence perform badly. Furthermore, conventional cellular systems are designed mainly for human users. In future wireless networks, machine-type communications such as the Internet of Things, Internet of Everything, Smart X, etc., are expected to play an important role. The main challenge of machine-type communications is scalable and efficient connectivity for billions of devices. Centralized technology with cellular topologies does not seem to work in such scenarios, since each cell can cover only a limited number of user terminals. So why not a cell-free structure with decentralized technology? Of course, to serve many user terminals and to simplify the signal processing in a distributed manner, Massive MIMO technology should be included. The combination of a cell-free structure and Massive MIMO technology yields a new concept: Cell-Free Massive MIMO.

What is Cell-Free Massive MIMO? It is a system in which a massive number of access points, distributed over a large area, coherently serve a massive number of user terminals in the same time/frequency band. Cell-Free Massive MIMO focuses on cellular frequencies, although millimeter-wave bands can be used in combination with them. There are no cells or cell boundaries here. Of course, specific signal processing is used; see [1] for details. Cell-Free Massive MIMO is a new concept: a practical, useful, and scalable version of network MIMO (also known as coordinated multipoint with joint processing) [2, 3]. To some extent, Massive MIMO technology, based on the favorable propagation and channel hardening properties, is used in Cell-Free Massive MIMO.

Cell-Free Massive MIMO is different from distributed Massive MIMO [4]. Both systems use many service antennas, deployed in a distributed fashion, to serve many user terminals, but they are not the same. With distributed Massive MIMO, the base station antennas are distributed within each cell, and those antennas serve only the user terminals in that cell. By contrast, in Cell-Free Massive MIMO there are no cells: all service antennas coherently serve all user terminals. The figure below compares the structures of distributed Massive MIMO and Cell-Free Massive MIMO.

Figure: Distributed Massive MIMO (left) versus Cell-Free Massive MIMO (right).
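As a toy illustration of this difference (my own sketch, not the processing analyzed in [1]), the following compares uplink maximum-ratio combining when each user is served only by the access points of its own cell versus by all access points. All system parameters are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, cells = 64, 8, 4               # APs, users, cells (toy numbers)
ap_cell = rng.integers(0, cells, L)  # cell membership of each AP
ue_cell = rng.integers(0, cells, K)  # cell membership of each user

# Random large-scale fading (0-30 dB spread) and i.i.d. Rayleigh small-scale
beta = 10.0 ** (-rng.uniform(0.0, 3.0, (L, K)))
H = np.sqrt(beta) * (rng.standard_normal((L, K)) +
                     1j * rng.standard_normal((L, K))) / np.sqrt(2)

def mr_sinr_db(serving):
    """Uplink SINRs with maximum-ratio combining over the serving APs.

    serving[l, k] = 1 if AP l participates in detecting user k.
    Unit transmit power per user and unit noise power are assumed."""
    out = []
    for k in range(K):
        w = serving[:, k] * H[:, k]                 # MR combining vector
        sig = abs(w.conj() @ H[:, k]) ** 2
        intf = sum(abs(w.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
        out.append(sig / (intf + np.linalg.norm(w) ** 2))
    return np.round(10 * np.log10(out), 1)

cellular = (ap_cell[:, None] == ue_cell[None, :]).astype(float)
cell_free = np.ones((L, K))
print("per-user SINR [dB], cellular :", mr_sinr_db(cellular))
print("per-user SINR [dB], cell-free:", mr_sinr_db(cell_free))
```

In most channel realizations the weakest users gain the most from being served by all access points, which is precisely the cell-edge benefit that motivates the cell-free concept.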

[1] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, "Cell-Free Massive MIMO versus Small Cells," IEEE Trans. Wireless Commun., submitted for publication, 2016. Available: https://arxiv.org/abs/1602.08232

[2] G. Foschini, K. Karakayali, and R. A. Valenzuela, "Coordinating multiple antenna cellular networks to achieve enormous spectral efficiency," IEE Proc. Commun., vol. 152, pp. 548–555, Aug. 2006.

[3] E. Björnson, R. Zakhour, D. Gesbert, B. Ottersten, “Cooperative Multicell Precoding: Rate Region Characterization and Distributed Strategies with Instantaneous and Statistical CSI,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4298-4310, Aug. 2010.

[4] K. T. Truong and R.W. Heath Jr., “The viability of distributed antennas for massive MIMO systems,” in Proc. Asilomar CSSC, 2013, pp. 1318–1323.