
Reproducible Massive MIMO Research

Reproducibility is fundamental to scientific research. If you develop a new algorithm and use simulations/experiments to claim its superiority over prior algorithms, your claims are only credible if other researchers can reproduce and confirm them.

The first step towards reproducibility is to describe the simulation procedure in such detail that another researcher can repeat the simulation, although a major effort is typically needed to reimplement everything. The second step is to make the simulation code publicly available, so that any scientist can review it and easily reproduce the results. While the first step is mandatory for publishing a scientific study, there is a movement towards open science that would make the second step common practice as well.

I understand that some researchers are skeptical about sharing their simulation code, for fear of losing their competitive advantage over other research groups. My personal principle is to not share any code until the research study is finished and the results have been accepted for publication in a full-length journal. After that, I think that society benefits the most if other researchers can focus on improving my and others’ research, instead of spending an excessive amount of time on reimplementing known algorithms. I also believe that the primary competitive advantage in research is the know-how and technical insights, while the simulation code is of secondary importance.

On my GitHub page, I have published Matlab code packages that reproduce the simulation results in one book, one book chapter, and more than 15 peer-reviewed articles. Most of these publications are related to MIMO or Massive MIMO. I see many benefits from doing this:

1) It increases the credibility of my research group’s work;

2) I write better code when I know that other people will read it;

3) Other researchers can dedicate their time to developing new and improved algorithms and comparing them with my baseline implementations;

4) Young scientists may learn how to implement a basic simulation environment by reading the code.

I hope that other Massive MIMO researchers will also make their simulation code publicly available. Maybe you have already done that? In that case, please feel free to write a comment to this post with a link to your code.

Book Review: The 5G Myth

The 5G Myth is the provocative title of a recent book by William Webb, CEO of Weightless SIG, a standards body for IoT/M2M technology. In this book, the author tells a compelling story of a stagnating market for cellular communications, where the customers are generally satisfied with the data rates delivered by the 4G networks. The revenue growth for the mobile network operators (MNOs) is relatively low and declining, since the current services are so good that the customers are unwilling to pay more for improved service quality. Although many new wireless services have materialized over the past decade (e.g., video streaming, social networks, video calls, mobile payment, and location-based services), the MNOs have failed to take the leading role in any of them. Instead, the customers make use of external services (e.g., YouTube, Facebook, Skype, Apple Pay, and Google Maps) and only pay the MNOs to deliver the data bits.

The author argues that, under these circumstances, the MNOs have little to gain from investing in 5G technology. Most customers are not asking for any of the envisaged 5G services and will not be inclined to pay extra for them. Webb even compares the situation to the prisoner’s dilemma: the MNOs would benefit the most from not investing in 5G, but they will make the investments anyway to avoid a situation where customers switch to a competitor that has invested in 5G. The picture that Webb paints of 5G is rather pessimistic compared to a recent McKinsey report, in which more cost-efficient network operation is described as a key reason for MNOs to invest in 5G.

The author provides a refreshing description of the market for cellular communications, which is important at a time when the research community focuses more on broad 5G visions than on the customers’ actual needs. The book is thus a recommended read for 5G researchers, since we should all ask ourselves whether we are developing a technology that tackles the right unsolved problems.

Webb criticizes not only the economic incentives for 5G deployment, but also the 5G visions and technologies in general. The claims are in many cases reasonable; for example, Webb accurately points out that most of the 5G performance goals are overly optimistic and probably only required by a tiny fraction of the user base. He also accurately points out that some “5G applications” already have a wireless solution (e.g., indoor IoT devices connected over WiFi) or should preferably be wired (e.g., ultra-reliable low-latency applications such as remote surgery).

However, it is also in this part of the book that the argumentation sometimes falls short. For example, Webb extrapolates a recent drop in traffic growth to claim that the global traffic volume will reach a plateau in 2027. It is plausible that the traffic growth rate will decrease as a larger and larger fraction of the global population gets access to wireless high-speed connections. But one should bear in mind that we have witnessed exponential growth in wireless communication traffic for the past century (known as Cooper’s law), so this trend may just as well continue for a few more decades, potentially at a lower growth rate than in the past decade.

Webb also provides a misleading description of multiuser MIMO by claiming that 1) the antenna arrays would be unreasonably large at cellular frequencies and 2) the beamforming requires complicated angular beam-steering. These are two of the myths that we dispelled in the paper “Massive MIMO: Ten myths and one grand question” last year. In fact, testbeds have demonstrated that massive multiuser MIMO is feasible in lower frequency bands, and particularly useful for improving the spectral efficiency through coherent beamforming and spatial multiplexing of users. Reciprocity-based beamforming is a solution for mobile and cell-edge users, for which angular beam-steering is indeed inefficient.

The book is not as pessimistic about the future as it might seem from this review. Webb provides an alternative vision for future wireless communications, where consistent connectivity rather than higher peak rates is the main focus. This coincides with one of the 5G performance goals (i.e., 50 Mbit/s everywhere), but Webb advocates an extensive government-supported deployment of WiFi instead of 5G technology. The use of WiFi is not a bad idea; I personally consume relatively little cellular data since WiFi is available at home, at work, and at many public locations in Sweden. However, cellular services are necessary to realize the dream of consistent connectivity, particularly outdoors and when in motion. This is where a 5G cellular technology that delivers better coverage and higher data rates at the cell edge is highly desirable. Reciprocity-based Massive MIMO seems to be the solution that can deliver this, so Webb would have had a stronger case if this technology had been properly integrated into his vision.

In summary, the combination of 5G Massive MIMO for wide-area coverage and WiFi for local-area coverage might be the way to truly deliver consistent connectivity.

Real-Time Massive MIMO DSP at 50 milliWatt

Last month, colleagues at Lund University presented a working circuit that performs, in real time, zero-forcing decoding and precoding for 8 simultaneous terminals with 128 base station antennas over a 20 MHz bandwidth, at a power consumption of about 50 milliwatts.

Impressive, and important.

Granted, this number does not include the complexity of FFTs, sampling rate conversions, and several other (non-insignificant) tasks; however, it does include the bulk of the “Massive MIMO”-specific digital processing. The design exploits a number of tricks and Massive MIMO-specific properties, in particular the diagonal dominance of the channel Gramian under sufficiently favorable propagation.
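To illustrate what diagonal dominance of the Gramian means, here is a minimal NumPy sketch, assuming i.i.d. Rayleigh fading and floating-point arithmetic (the chip itself naturally uses fixed-point hardware and further optimizations not captured here):

```python
import numpy as np

# Dimensions matching the chip: 128 base station antennas, 8 terminals.
M, K = 128, 8
rng = np.random.default_rng(0)

# Channel matrix H (M x K) with i.i.d. CN(0,1) entries (an assumed channel model).
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Channel Gramian G = H^H H (K x K). Under favorable propagation G/M approaches
# the identity, so the off-diagonal entries are small relative to the diagonal.
G = H.conj().T @ H
diag_mean = np.mean(np.abs(np.diag(G)))
offdiag_mean = np.mean(np.abs(G[~np.eye(K, dtype=bool)]))
print(f"mean |diagonal| = {diag_mean:.1f}, mean |off-diagonal| = {offdiag_mean:.1f}")

# Zero-forcing matrix W = H (H^H H)^{-1}. Diagonal dominance of G is what makes
# approximate inversions (e.g., a truncated Neumann series, one commonly cited
# trick) accurate; here we simply invert exactly.
W = H @ np.linalg.inv(G)

# Sanity check: W^H H is (numerically) the identity, so the 8 streams do not interfere.
print(np.allclose(W.conj().T @ H, np.eye(K)))
```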

When I started working on Massive MIMO in 2009, the common view was that the technology would be infeasible because of its computational complexity. In particular, the mere idea of performing zero-forcing processing in real time was met with, if not ridicule, extreme skepticism. We quickly realized, however, that a reasonable DSP implementation would require no more than some ten watts. While that is a small number in itself, it turned out to be an overestimate by orders of magnitude!

I spoke with some of the lead inventors of the chip to learn more about its design. First, the architectures for decoding and for precoding differ a bit. While there is no fundamental reason why this has to be so, one motivation is the possible use of nonlinear detectors on the uplink. (The need for such detectors, for most “typical” cellular Massive MIMO deployments, is not clear – but that is another story.)

Second, and more importantly, the scalability of the design is not clear. While the complexity of the matrix operations themselves already scales quickly with the dimension, the precision of the arithmetic may have to be increased as well, resulting in an overall complexity that scales much faster than cubically. Since Massive MIMO operates at its best when multiplexing many tens of terminals (or even thousands, in some applications), significant challenges remain for the future. That is good news for circuit engineers, algorithm designers, and communications theoreticians alike. The next ten years will be exciting.
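As a rough indication of how the operation count alone grows with the dimensions, here is a sketch under simplified assumptions; it counts only complex multiplications for the zero-forcing processing and ignores FFTs, other tasks, and the arithmetic-precision issue, which is the harder part of the scaling:

```python
def zf_complex_mults(M, K, data_symbols_per_block):
    """Very rough complex-multiplication count for zero-forcing in one coherence block:
    forming the Gramian H^H H (~M*K^2), inverting it (~K^3), forming the ZF matrix
    H (H^H H)^{-1} (~M*K^2), and applying it to each data symbol vector (~M*K)."""
    per_block = M * K**2 + K**3 + M * K**2
    per_data = M * K * data_symbols_per_block
    return per_block + per_data

# Illustrative (assumed) coherence block of 200 symbols, with the array scaled as M = 16K.
for M, K in [(128, 8), (256, 16), (1024, 64)]:
    print(f"M = {M:>4}, K = {K:>2}: {zf_complex_mults(M, K, 200):,} complex multiplications per block")
```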

Relative Value of Spectrum

What is worth more: 1 MHz of bandwidth at a 100 MHz carrier frequency, or 10 MHz of bandwidth at a 1 GHz carrier? Conventional wisdom has it that higher carrier frequencies are more valuable because “there is more bandwidth there”. In this post, I will explain why that is not entirely correct.

The basic presumption of TDD/reciprocity-based Massive MIMO is that all activity, comprising the transmission of uplink pilots, uplink data, and downlink data, takes place inside a coherence interval.

At fixed mobility, in meter/second, the dimensionality of the coherence interval is proportional to the wavelength, because the Doppler spread is proportional to the carrier frequency.
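As a rule-of-thumb check (taking the coherence time to be on the order of the inverse Doppler spread, an assumption used here purely for illustration), for a terminal moving at speed $v$ on a carrier $f_c$ with wavelength $\lambda = c/f_c$:

$$ f_D = \frac{v f_c}{c} = \frac{v}{\lambda}, \qquad T_c \sim \frac{1}{f_D} = \frac{\lambda}{v}, \qquad \text{so that } B_c T_c \propto \lambda \text{ at fixed } v \text{ and } B_c. $$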

In a single cell with max-min fairness power control (for uniform quality-of-service provision), the sum-throughput of Massive MIMO can be computed analytically.
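Written out for maximum-ratio processing (an illustrative choice; other linear processing schemes give expressions with the same structure), the sum-throughput takes the form

$$ \frac{B}{2}\, K \left(1 - \frac{K}{B_c T_c}\right) \log_2\!\left(1 + \frac{M \cdot \mathrm{SNR}}{\mathrm{SNR} \sum_{k=1}^{K} \frac{\beta_k}{\gamma_k} + \sum_{k=1}^{K} \frac{1}{\gamma_k}}\right). $$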

In this formula,

  • $B$ = bandwidth in Hertz (split equally between uplink and downlink)
  • $M$ = number of base station antennas
  • $K$ = number of multiplexed terminals
  • $B_c$ = coherence bandwidth in Hertz (independent of carrier frequency)
  • $T_c$ = coherence time in seconds (inversely proportional to carrier frequency)
  • SNR = signal-to-noise ratio (“normalized transmit power”)
  • $\beta_k$ = path loss of the $k$-th terminal
  • $\gamma_k$ = a constant that is close to $\beta_k$ when the pilot power is sufficient

This formula assumes independent Rayleigh fading, but the general conclusions hold under other channel models as well.

The factor that pre-multiplies the logarithm, $\frac{B}{2} K \left(1 - \frac{K}{B_c T_c}\right)$, depends on $K$ and is maximized when $K = B_c T_c/2$. The maximal value is $B B_c T_c/8$, which is proportional to $T_c$, and therefore proportional to the wavelength. Owing to the product $B T_c$, the same pre-log factor can be obtained with a smaller bandwidth by increasing the wavelength, that is, by reducing the carrier frequency. At the same time, if the number of antennas $M$ is scaled in proportion to the number of terminals $K$, the quantity inside the logarithm stays constant.

Concluding, the sum spectral efficiency (in b/s/Hz) can easily double for every doubling of the wavelength: a megahertz of bandwidth at a 100 MHz carrier is worth ten times more than a megahertz of bandwidth at a 1 GHz carrier. So while there is more bandwidth available at higher carrier frequencies, the potential multiplexing gains are correspondingly smaller.
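As a numerical illustration, consider three setups in which the bandwidth scales in proportion to the carrier frequency, $K$ is chosen to maximize the pre-log factor, and $M = 8K$ (all parameter values below are assumptions chosen for this sketch, not measured numbers):

```python
from math import log2

# Illustrative, assumed parameters (not taken from any measurement).
Bc = 200e3        # coherence bandwidth [Hz], independent of the carrier frequency
Tc_ref = 2e-3     # assumed coherence time [s] at the reference carrier frequency
fc_ref = 100e6    # reference carrier frequency [Hz]
SNR = 1.0         # normalized transmit power
beta = 1.0        # common path loss, with beta_k = gamma_k = beta for all terminals

def sum_throughput(B, fc):
    """Single-cell sum-throughput [bit/s] with max-min power control and
    maximum-ratio processing, for K maximizing the pre-log factor and M = 8K."""
    Tc = Tc_ref * fc_ref / fc                  # coherence time is inversely proportional to fc
    K = round(Bc * Tc / 2)                     # pre-log-maximizing number of terminals
    M = 8 * K                                  # scale the array with the number of terminals
    prelog = 0.5 * B * K * (1 - K / (Bc * Tc))
    sinr = M * SNR / (K * (SNR + 1 / beta))    # max-min SINR when all beta_k = gamma_k = beta
    return prelog * log2(1 + sinr), K

for B, fc in [(1e6, 100e6), (10e6, 1e9), (100e6, 10e9)]:
    rate, K = sum_throughput(B, fc)
    print(f"{B/1e6:>5.0f} MHz at {fc/1e9:4.1f} GHz: sum {rate/1e6:.0f} Mbit/s, "
          f"per terminal {rate/K/1e6:5.2f} Mbit/s (K = {K})")
```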

In this example, all three setups give the same sum-throughput; the throughput per terminal, however, is vastly different.

More Demanding Massive MIMO Trials Using the Bristol Testbed

Last year, the 128-antenna Massive MIMO testbed at the University of Bristol was used to set world records in per-cell spectral efficiency. Those measurements were conducted in a controlled indoor environment, but demonstrated that the theoretical gains of the technology are also practically achievable, at least in simple propagation scenarios.

The Bristol team has now worked with British Telecom and conducted trials at their site in Adastral Park, Suffolk, in more demanding user scenarios. In the indoor exhibition hall trial, 24 user streams were multiplexed over a 20 MHz bandwidth, resulting in a sum rate of 2 Gbit/s or a spectral efficiency of 100 bit/s/Hz/cell.

Several outdoor experiments were also conducted, which included user mobility. We are looking forward to seeing more details on these experiments, but in the meantime one can have a look at the following video:

Update: We have corrected the bandwidth number in this post.

Improving the Cell-Edge Performance

The cellular network that my smartphone connects to normally delivers 10-40 Mbit/s. That is sufficient for video-streaming and other applications that I might use. Unfortunately, I sometimes have poor coverage and then I can barely download emails or make a phone call. That is why I think that providing ubiquitous data coverage is the most important goal for 5G cellular networks. It might also be the most challenging 5G goal, because the area coverage has been an open problem since the first generation of cellular technology.

It is the physics that makes it difficult to provide good coverage. The transmitted signals spread out, and only a tiny fraction of the transmitted power reaches the receive antenna (e.g., one part in a billion). In cellular networks, the received signal power decreases roughly as the fourth power of the propagation distance. This results in the following data rate coverage behavior:

Figure 1: Variations in the downlink data rates in an area covered by nine base stations.

This figure considers an area covered by nine base stations, which are located at the middle of the nine peaks. Users that are close to one of the base stations receive the maximum downlink data rate, which in this case is 60 Mbit/s (e.g., a spectral efficiency of 6 bit/s/Hz over a 10 MHz channel). As a user moves away from a base station, the data rate drops rapidly. At the cell edge, where the user is equally distant from multiple base stations, the rate is nearly zero in this simulation. This is because the received signal power is low compared with the receiver noise.
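A back-of-the-envelope sketch of this noise-limited behavior (the reference SNR and reference distance below are assumptions chosen for illustration; they only roughly mimic the figure):

```python
from math import log2

B = 10e6          # bandwidth [Hz]
SE_max = 6.0      # maximum spectral efficiency [bit/s/Hz], i.e., 60 Mbit/s over 10 MHz
SNR_ref = 63.0    # assumed SNR (linear scale, about 18 dB) at the reference distance
d_ref = 100.0     # assumed reference distance [m]

def downlink_rate(d):
    """Noise-limited downlink rate [bit/s] at distance d [m] from the base station,
    with the received power decaying as the fourth power of the distance."""
    snr = SNR_ref * (d_ref / d) ** 4
    return B * min(SE_max, log2(1 + snr))

for d in [100, 200, 300, 500, 800]:
    print(f"d = {d:>3} m: {downlink_rate(d) / 1e6:5.1f} Mbit/s")
```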

What can be done to improve the coverage?

One possibility is to increase the transmit power. This is mathematically equivalent to densifying the network, so that the area covered by each base station is smaller. The figure below shows what happens if we use 100 times more transmit power:

Figure 2: The transmit powers have been increased 100 times as compared to Figure 1.

There are some visible differences compared with Figure 1. First, the region around each base station that gives 60 Mbit/s is larger. Second, the data rates at the cell edge are slightly improved, but there are still large variations within the area. However, it is no longer the noise that limits the cell-edge rates, but the interference from the other base stations.

The inter-cell interference remains even if we increase the transmit power further. The reason is that, at the cell edge, the desired signal power and the interfering signal power grow in the same manner. Similar things happen if we densify the network by adding more base stations, as nicely explained in a recent paper by Andrews et al.

Ideally, we would like to increase only the power of the desired signals, while keeping the interference power fixed. This is what transmit precoding from a multi-antenna array can achieve; the transmitted signals from the multiple antennas at the base station add constructively only at the spatial location of the desired user. More precisely, the signal power is proportional to M (the number of antennas), while the interference power caused to other users is independent of M. The following figure shows the data rates when we go from 1 to 100 antennas:

Figure 3: The number of base station antennas has been increased from 1 (as in Figure 1) to 100.

Figure 3 shows that the data rates are increased for all users, but particularly for those at the cell edge. In this simulation, everyone is now guaranteed a minimum data rate of 30 Mbit/s, while 60 Mbit/s is delivered in a large fraction of the coverage area.

In practice, the propagation losses are not only distance-dependent, but also affected by other large-scale effects, such as shadowing. The properties described above nevertheless remain. Coherent precoding from a base station with many antennas can greatly improve the data rates for cell-edge users, since only the desired signal power (and not the interference power) is increased. Higher transmit power or smaller cells will only lead to an interference-limited regime where the cell-edge performance remains poor. A practical challenge with coherent precoding is that the base station needs to learn the user channels, but reciprocity-based Massive MIMO provides a scalable solution to that. That is why Massive MIMO is the key technology for delivering ubiquitous connectivity in 5G.
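A minimal Monte Carlo sketch of the scaling argument, assuming i.i.d. Rayleigh fading and maximum-ratio precoding (the figures above were of course produced with a more complete simulation): the power received by the intended user grows with M, while the power leaked to another user does not.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 10_000

for M in [1, 10, 100]:
    sig = intf = 0.0
    for _ in range(trials):
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # desired user's channel
        g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # another user's channel
        w = h / np.linalg.norm(h)           # maximum-ratio precoder with unit transmit power
        sig += abs(w.conj() @ h) ** 2       # power received by the desired user
        intf += abs(w.conj() @ g) ** 2      # power leaked to the other user
    print(f"M = {M:>3}: signal power {sig / trials:6.1f}, interference power {intf / trials:4.2f}")
```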

Field Tests of FDD Massive MIMO

Frequency-division duplex (FDD) operation of Massive MIMO in LTE is the topic of two press releases from January 2017. The first press release describes a joint field test carried out by ZTE and China Telecom. It claims three-fold improvements in per-cell spectral efficiency using standard LTE devices, but no further details are given. The second press release describes a field verification carried out by Huawei and China Unicom. The average data rate was 87 Mbit/s per user over a 20 MHz channel and was achieved using commercial LTE devices. This corresponds to a spectral efficiency of 4.36 bit/s/Hz per user. A sum rate of 697 Mbit/s is also mentioned, from which one could guess that eight users were multiplexed (87 × 8 = 696).

Image source: Huawei

There are no specific details of the experimental setup or implementation in any of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology and hopefully it will result in something revolutionary in the years to come.

Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.
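The root of the difficulty can be seen from a simple pilot-overhead count (the block size and array dimensions below are assumptions for illustration, and the FDD figure ignores the feedback overhead and any exploitation of channel structure):

```python
# In TDD, uplink pilots from the K terminals suffice thanks to reciprocity, whereas in
# unstructured FDD the downlink pilots must excite all M antennas and the resulting
# estimates must then be fed back to the base station.
M, K, coherence_block = 64, 8, 200   # assumed numbers of antennas, users, and symbols per block
print(f"TDD pilot overhead: {K / coherence_block:.1%} of the coherence block")
print(f"FDD pilot overhead: at least {M / coherence_block:.1%}, plus CSI feedback")
```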

We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?

Update: Blue Danube Systems has also released a press release describing trials of FDD Massive MIMO. Many companies apparently want to be “first” with this technology for LTE.