There has been a long-standing debate on the relative performance of reciprocity-based (TDD) Massive MIMO and FDD solutions based on grid-of-beams or hybrid beamforming architectures. The matter was, for example, the subject of a heated discussion at the 2015 Globecom industry panel “Massive MIMO vs FD-MIMO: Defining the next generation of MIMO in 5G”: on the one hand, the commercial arguments for grid-of-beams solutions were clear; on the other hand, their real potential for high-performance spatial multiplexing was strongly contested.
While it is known that grid-of-beams solutions perform poorly in isotropic scattering, no prior experimental results were known. This new paper answers the performance question through the analysis of real Massive MIMO channel measurement data obtained in the 2.6 GHz band. Except in certain line-of-sight (LOS) environments, the original reciprocity-based TDD Massive MIMO represents the only effective implementation of Massive MIMO at the frequency bands under consideration.
In January this year, the IEEE Signal Processing Magazine contained an article by Erik G. Larsson, Danyo Danev, Mikael Olofsson, and Simon Sörman on “Teaching the Principles of Massive MIMO: Exploring reciprocity-based multiuser MIMO beamforming using acoustic waves”. It describes an exciting approach to teaching the basics of Massive MIMO communication by implementing the system acoustically, using loudspeaker elements instead of antennas. The fifth-year engineering students at Linköping University have performed such implementations in 2014, 2015, and 2016, in the form of a conceive-design-implement-operate (CDIO) project.
The article details the teaching principles and the experiences that the teachers and students had from the 2015 edition of the CDIO project. This was also described in a previous blog post. In the following video, the students describe and demonstrate the end result of the 2016 edition of the project. The acoustic testbed is now truly massive, since 64 loudspeakers were used.
Which is worth more: 1 MHz of bandwidth at a 100 MHz carrier frequency, or 10 MHz of bandwidth at a 1 GHz carrier? Conventional wisdom has it that higher carrier frequencies are more valuable because “there is more bandwidth there”. In this post, I will explain why that is not entirely correct.
The basic presumption of TDD/reciprocity-based Massive MIMO is that all activity, comprising the transmission of uplink pilots, uplink data and downlink data, takes place inside of a coherence interval:
At fixed mobility (in meters per second), the dimensionality of the coherence interval is proportional to the wavelength, because the Doppler spread is proportional to the carrier frequency.
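The scaling of the coherence time with the carrier can be illustrated numerically. This is a minimal sketch using the common rule of thumb $T_c \approx \lambda/(2v)$; the exact constant varies between textbooks, but the proportionality to the wavelength does not.

```python
# Sketch: coherence time vs. carrier frequency at a fixed mobile speed.
# Rule of thumb T_c ~ lambda / (2 v); the constant is approximate, the
# proportionality to the wavelength is what matters here.
C = 3e8  # speed of light, m/s

def coherence_time(carrier_hz, speed_mps):
    wavelength = C / carrier_hz
    return wavelength / (2 * speed_mps)

for fc in (100e6, 1e9):
    tc = coherence_time(fc, speed_mps=10)  # 10 m/s, about 36 km/h
    print(f"{fc/1e6:6.0f} MHz carrier: T_c = {tc*1e3:.1f} ms")
```

Dropping the carrier by a factor of ten lengthens the coherence time by the same factor, which is exactly the effect exploited in the argument below.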
In a single cell, with max-min fairness power control (for uniform quality-of-service provision), the sum-throughput of Massive MIMO can be computed analytically and is given by the following formula:

$$\frac{B}{2}\, K \left(1-\frac{K}{B_c T_c}\right) \log_2\!\left(1+\frac{\alpha\, M\, \mathrm{SNR}}{K\,\mathrm{SNR} + \sum_{k=1}^{K} \beta_k^{-1}}\right)$$

In this formula,

$B$ = bandwidth in Hertz (split equally between uplink and downlink)

$M$ = number of base station antennas

$K$ = number of multiplexed terminals

$B_c$ = coherence bandwidth in Hertz (independent of carrier frequency)

$T_c$ = coherence time in seconds (inversely proportional to carrier frequency)

$\mathrm{SNR}$ = signal-to-noise ratio (“normalized transmit power”)

$\beta_k$ = path loss for the $k$:th terminal

$\alpha$ = constant, close to $1$ with sufficient pilot power
This formula assumes independent Rayleigh fading, but the general conclusions remain under other models.
The factor that pre-multiplies the logarithm, $\frac{B}{2} K \left(1-\frac{K}{B_c T_c}\right)$, depends on $K$.

The pre-log factor is maximized when $K = B_c T_c/2$. The maximal value is $B B_c T_c/8$, which is proportional to $T_c$, and therefore proportional to the wavelength. Due to the product $B T_c$, one can obtain the same pre-log factor with a smaller bandwidth by instead increasing the wavelength, i.e., reducing the carrier frequency. At the same time, assuming appropriate scaling of the number of antennas, $M$, with the number of terminals, $K$, the quantity inside the logarithm is constant.
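The maximization of the pre-log factor can be checked numerically. This sketch uses illustrative values of $B_c$ and $T_c$ (not tied to any specific band) and confirms that the maximum over $K$ sits at $B_c T_c/2$ with value $B B_c T_c/8$.

```python
# Numerical check: the pre-log factor (B/2) * K * (1 - K/(Bc*Tc))
# is maximized at K = Bc*Tc/2, where it equals B*Bc*Tc/8.
# Bc and Tc below are illustrative example values.
B = 1e6       # 1 MHz bandwidth
Bc = 200e3    # coherence bandwidth, 200 kHz (carrier-independent)
Tc = 1e-3     # coherence time, 1 ms

def prelog(K):
    return (B / 2) * K * (1 - K / (Bc * Tc))

samples = int(Bc * Tc)  # dimension of the coherence interval (here 200)
best_K = max(range(1, samples), key=prelog)
print(best_K)          # Bc*Tc/2 = 100 terminals
print(prelog(best_K))  # B*Bc*Tc/8 = 2.5e7
```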
In conclusion, the sum spectral efficiency (in bit/s/Hz) can easily double for every doubling of the wavelength: a megahertz of bandwidth at a 100 MHz carrier is worth ten times more than a megahertz of bandwidth at a 1 GHz carrier. So while there is more bandwidth available at higher carriers, the potential multiplexing gains are correspondingly smaller.
In this example, all three setups give the same sum throughput; however, the throughput per terminal is vastly different.
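In the spirit of the example above, here is an illustrative sketch (the numbers are hypothetical, not those of the post's figure): three setups whose carriers differ by factors of ten, with bandwidth scaled up in proportion to the carrier, all yield the same maximal pre-log factor $B B_c T_c/8$, but the pre-log-optimal number of terminals $K = B_c T_c/2$ shrinks with the carrier, so each terminal's share grows.

```python
# Hypothetical illustration: same sum pre-log factor, very different
# per-terminal share. Tc is modeled as proportional to the wavelength
# (toy scaling constant), Bc is carrier-independent.
Bc = 200e3  # coherence bandwidth in Hz (assumed carrier-independent)

# (carrier frequency in Hz, bandwidth in Hz)
setups = [(100e6, 1e6), (1e9, 10e6), (10e9, 100e6)]

results = []
for fc, B in setups:
    Tc = 1e5 / fc                  # toy model: Tc proportional to wavelength
    K = Bc * Tc / 2                # pre-log-optimal number of terminals
    max_prelog = B * Bc * Tc / 8   # identical for all three setups
    results.append((K, max_prelog))
    print(f"fc={fc/1e6:7.0f} MHz, B={B/1e6:5.1f} MHz: K={K:5.1f} terminals, "
          f"sum pre-log={max_prelog:.2e}, per-terminal={max_prelog/K:.2e}")
```

The sum pre-log factor is identical in all three rows, while the per-terminal share differs by a factor of ten between consecutive rows.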
Last year, the 128-antenna Massive MIMO testbed at University of Bristol was used to set world records in per-cell spectral efficiency. Those measurements were conducted in a controlled indoor environment, but demonstrated that the theoretical gains of the technology are also practically achievable—at least in simple propagation scenarios.
The Bristol team has now worked with British Telecom and conducted trials at their site in Adastral Park, Suffolk, in more demanding user scenarios. In the indoor exhibition hall trial, 24 user streams were multiplexed over a 20 MHz bandwidth, resulting in a sum rate of 2 Gbit/s or a spectral efficiency of 100 bit/s/Hz/cell.
Several outdoor experiments were also conducted, which included user mobility. We are looking forward to seeing more details on these experiments, but in the meantime one can have a look at the following video:
Update: We have corrected the bandwidth number in this post.
The Mobile World Congress (MWC) was held in Barcelona last week. Several major telecom companies took the opportunity to showcase and describe their pre-5G solutions based on Massive MIMO technology.
Huawei and Optus carried out an infield trial on February 26, where a sum rate of 655 Mbit/s was obtained over a 20 MHz channel by spatial multiplexing of 16 users. This corresponds to 33 bit/s/Hz or 2 bit/s/Hz/user, which are typical spectral efficiencies to expect from Massive MIMO. The base station was equipped with 128 antenna ports, but the press release provides no details on whether uplink or downlink transmission was considered.
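The reported figures are easy to sanity-check:

```python
# Sanity check of the Huawei/Optus trial figures reported above.
sum_rate = 655e6   # 655 Mbit/s
bandwidth = 20e6   # 20 MHz
users = 16

se_cell = sum_rate / bandwidth  # sum spectral efficiency, bit/s/Hz
se_user = se_cell / users       # per-user spectral efficiency
print(f"{se_cell:.1f} bit/s/Hz per cell, {se_user:.1f} bit/s/Hz per user")
# → 32.8 bit/s/Hz per cell, 2.0 bit/s/Hz per user
```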
Nokia and Sprint demonstrated TDD-based Massive MIMO technology for LTE networks, using 64 antenna ports at the base station. Spatial multiplexing of eight commercial LTE terminals was considered. Communication theory predicts that the sum rate should grow proportionally to the number of terminals, which is consistent with the 8x improvement in uplink rates and 5x improvement in downlink rates that were reported. Further details are found in their press release or in the following video:
Signal processing is at the core of the emerging fifth generation (5G) cellular communication systems, which will bring revolutionary changes to the physical layer. Unlike other 5G events, the objective of this summer school is to teach the main physical-layer techniques for 5G from a signal-processing perspective. The lectures will provide a background on the 5G wireless communication concepts and their formulation from a signal processing perspective. Emphasis will be placed on showing specifically how cutting-edge signal processing techniques can and will be applied to 5G. Keynote speeches by leading researchers from Ericsson, Huawei, China Mobile, and Volvo complement the technical lectures.
The summer school covers the following specific topics:
Massive MIMO communication in TDD and FDD
mmWave communications and compressed sensing
Wireless access for massive machine-type communications
The school takes place in Gothenburg, Sweden, from May 29th to June 1st, in the week after ICC in Paris.
This event belongs to the successful series of IEEE SPS and EURASIP Seasonal Schools in Signal Processing. The 2017 edition is jointly organized by Chalmers University of Technology, Linköping University, The University of Texas at Austin, Aalborg University and the University of Vigo.
Registration is now open. A limited number of student travel grants will be available.
The cellular network that my smartphone connects to normally delivers 10-40 Mbit/s. That is sufficient for video-streaming and other applications that I might use. Unfortunately, I sometimes have poor coverage and then I can barely download emails or make a phone call. That is why I think that providing ubiquitous data coverage is the most important goal for 5G cellular networks. It might also be the most challenging 5G goal, because the area coverage has been an open problem since the first generation of cellular technology.
It is the physics that makes it difficult to provide good coverage. The transmitted signals spread out and only a tiny fraction of the transmitted power reaches the receive antenna (e.g., one part in a billion). In cellular networks, the received signal power decays roughly as the fourth power of the propagation distance. This results in the following data rate coverage behavior:
This figure considers an area covered by nine base stations, which are located at the middle of the nine peaks. Users that are close to one of the base stations receive the maximum downlink data rate, which in this case is 60 Mbit/s (e.g., a spectral efficiency of 6 bit/s/Hz over a 10 MHz channel). As a user moves away from a base station, the data rate drops rapidly. At the cell edge, where the user is equally distant from multiple base stations, the rate is nearly zero in this simulation. This is because the received signal power is low compared to the receiver noise.
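The rapid rate collapse with distance can be sketched in a few lines. This is a minimal noise-limited illustration with assumed parameters (the reference SNR and distance are mine, not the post's simulation): the SNR falls as distance to the power of four, and the Shannon rate over a 10 MHz channel is capped at 6 bit/s/Hz (60 Mbit/s), as in the text.

```python
import math

# Noise-limited sketch with illustrative parameters: SNR ~ distance^-4,
# spectral efficiency capped at 6 bit/s/Hz over a 10 MHz channel.
bandwidth = 10e6      # Hz
snr_at_100m = 1000.0  # assumed SNR (30 dB) at a 100 m reference distance

def rate_mbit(distance_m):
    snr = snr_at_100m * (100.0 / distance_m) ** 4
    se = min(math.log2(1 + snr), 6.0)  # cap at 6 bit/s/Hz
    return bandwidth * se / 1e6

for d in (100, 200, 400, 800):
    print(f"{d:4d} m: {rate_mbit(d):5.1f} Mbit/s")
```

Doubling the distance costs 12 dB of SNR, so the rate stays near its cap close to the base station and then falls off a cliff toward the cell edge.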
What can be done to improve the coverage?
One possibility is to increase the transmit power. This is mathematically equivalent to densifying the network, so that the area covered by each base station is smaller. The figure below shows what happens if we use 100 times more transmit power:
There are some visible differences compared with Figure 1. First, the region around the base station that gives 60 Mbit/s is larger. Second, the data rates at the cell edge are slightly improved, but there are still large variations within the area. However, it is no longer the noise that limits the cell-edge rates, but the interference from other base stations.
Ideally, we would like to increase only the power of the desired signals, while keeping the interference power fixed. This is what transmit precoding from a multi-antenna array can achieve; the transmitted signals from the multiple antennas at the base station add constructively only at the spatial location of the desired user. More precisely, the signal power is proportional to M (the number of antennas), while the interference power caused to other users is independent of M. The following figure shows the data rates when we go from 1 to 100 antennas:
Figure 3 shows that the data rates are increased for all users, but particularly for those at the cell edge. In this simulation, everyone is now guaranteed a minimum data rate of 30 Mbit/s, while 60 Mbit/s is delivered in a large fraction of the coverage area.
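The key claim, that the desired signal power grows proportionally to $M$ while the interference leaked to other users does not, can be verified with a small Monte Carlo experiment. This is a sketch assuming maximum-ratio (conjugate) precoding and i.i.d. Rayleigh fading, per the independent-Rayleigh model used earlier.

```python
import math
import random

# Monte Carlo check: with unit-power maximum-ratio precoding toward user 1
# over i.i.d. Rayleigh channels, the average power received by user 1 grows
# linearly with the number of antennas M, while the power leaking to an
# independent user 2 stays flat (around 1).
random.seed(1)

def rayleigh(M):
    # i.i.d. CN(0,1) channel vector
    return [complex(random.gauss(0, math.sqrt(0.5)),
                    random.gauss(0, math.sqrt(0.5))) for _ in range(M)]

def avg_powers(M, trials=1000):
    sig = leak = 0.0
    for _ in range(trials):
        h1, h2 = rayleigh(M), rayleigh(M)
        norm = math.sqrt(sum(abs(x) ** 2 for x in h1))
        w = [x / norm for x in h1]  # unit-norm MRT precoder for user 1
        sig += abs(sum(a.conjugate() * b for a, b in zip(h1, w))) ** 2
        leak += abs(sum(a.conjugate() * b for a, b in zip(h2, w))) ** 2
    return sig / trials, leak / trials

for M in (1, 10, 100):
    s, l = avg_powers(M)
    print(f"M={M:3d}: signal power ~ {s:6.1f}, interference power ~ {l:4.2f}")
```

The signal power tracks $M$ (the array gain), while the interference column hovers around 1 regardless of $M$, which is exactly why adding antennas lifts the cell edge without raising the interference floor.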
In practice, the propagation losses are not only distance-dependent, but also affected by other large-scale effects, such as shadowing. Nevertheless, the properties described above remain. Coherent precoding from a base station with many antennas can greatly improve the data rates for cell-edge users, since only the desired signal power (and not the interference power) is increased. Higher transmit power or smaller cells only lead to an interference-limited regime where the cell-edge performance remains poor. A practical challenge with coherent precoding is that the base station needs to learn the user channels, but reciprocity-based Massive MIMO provides a scalable solution to that. That is why Massive MIMO is the key technology for delivering ubiquitous connectivity in 5G.