Adaptive beamforming for wireless communications has a long history, with modern research dating back to the 1970s and 1980s. There is even a paper from 1919 that describes directive transatlantic communication practices developed during the First World War. Many of the beamforming methods considered today can already be found in the 1988 magazine paper “Beamforming: A Versatile Approach to Spatial Filtering”. Plenty of further work was carried out in the 1990s and 2000s, before the Massive MIMO paradigm.
I think it is fair to say that no fundamentally new beamforming methods have been developed in the Massive MIMO literature; rather, we have taken known methods and generalized them to take imperfect channel state information and other practical aspects into account. We have then developed rigorous ways to quantify the achievable rates of these beamforming methods and studied the asymptotic behaviors when the number of antennas grows large. Closed-form rate expressions are available in some special cases, while Monte Carlo simulations can be used to compute the rates in other cases.
As beamforming has evolved from an analog phased-array concept, where angular beams are studied, to a digital concept, where the beamforming is represented in multi-dimensional vector spaces, it is easy to forget the basic properties of array processing. That is why we dedicated Section 7.4 of Massive MIMO Networks to describing how the physical beam width and spatial resolution depend on the array geometry.
In particular, I’ve observed a lot of confusion about the dimensionality of MIMO arrays, which is probably rooted in confusion about the difference between an antenna (which is something connected to an RF chain) and a radiating element. I explained this in detail in a previous blog post and then exemplified it based on a recent press release. I have also recorded the following video to visually explain these basic properties:
A recent white paper from Ericsson also provides a good description of these concepts, particularly focused on how an array with a given geometry can be implemented with different numbers of RF chains (i.e., different numbers of antennas) depending on the deployment scenario. While having as many antennas as radiating elements is preferable from a performance perspective, the Ericsson researchers argue that one can get away with fewer antennas in the vertical direction in deployments where it is anyway very hard to resolve users in the elevation dimension.
The tedious, time-consuming, and buggy nature of system-level simulations is exacerbated with massive MIMO. This post offers some relief in the form of analytical expressions for downlink conjugate beamforming. These expressions enable the testing and calibration of simulators—say to determine how many cells are needed to represent an infinitely large network with some desired accuracy. The trick that makes the analysis feasible is to let the shadowing grow strong, yet the ensuing expressions capture very well the behaviors with practical shadowing strengths.
The setting is an infinitely large cellular network where each -antenna base station (BS) serves single-antenna users. The large-scale channel gains include pathloss with exponent and shadowing having log-scale standard deviation , with the gain between the th BS and the th user served by a BS of interest denoted by . With conjugate beamforming and receivers reliant on channel hardening, the signal-to-interference ratio (SIR) at such a user is
where is the gain from the serving BS and is the share of that BS’s power allocated to user . Two power allocations can be analyzed:
SIR-equalizing : , with the proportionality constant ensuring that . This makes . Moreover, as and grow large,
The analysis is conducted for , which makes it valid for arbitrary BS locations.
For notational compactness, let . Define as the solution to where is the lower incomplete gamma function. For , in particular, . Under a uniform power allocation, the CDF of is available in an explicit form involving the Gauss hypergeometric function (available in MATLAB and Mathematica):
where “” indicates asymptotic () equality, is such that the CDF is continuous, and
Alternatively, the CDF can be obtained by solving (e.g., with Mathematica) a single integral involving the Kummer function :
This latter solution can be modified for the SIR-equalizing power allocation as
The spectral efficiency of user is with CDF readily characterizable from the expressions given earlier. From , the sum spectral efficiency at the BS of interest can be found as . Expressions for the averages and are further available in the form of single integrals.
With a uniform power allocation,
and . For the special case of , the Kummer function simplifies giving
With an equal-SIR power allocation
Application to Relevant Networks
Let us now contrast the analytical expressions (computable instantaneously and exactly, and valid for arbitrary topologies, but asymptotic in the shadowing strength) with some Monte-Carlo simulations (lengthy, noisy, and bug-prone, but for precise shadowing strengths and topologies).
First, we simulate a 500-cell hexagonal lattice with , and . Figs. 1a-1b compare the simulations for – dB with the analysis. The behaviors with these typical outdoor values of are well represented by the analysis and, as it turns out, it is in rigidly homogeneous networks such as this one that the gap is largest.
For a more irregular deployment, let us next consider a network whose BSs are uniformly distributed. BSs (500 on average) are dropped around a central one of interest. For each network snapshot, users are then uniformly dropped until of them are served by the central BS. As before, , and . Figs. 2a-2b compare the simulations for dB with the analysis, and the agreement is now complete. The simulated average spectral efficiency with a uniform power allocation is b/s/Hz/user while (2) gives b/s/Hz/user.
The analysis presented in this post is not without limitations, chiefly the absence of noise and pilot contamination. However, as argued in , there is a broad operating range (– with very conservative premises) where these effects are rather minor, and the analysis is hence applicable.
Pilots are predefined reference signals that are transmitted to let the receiver estimate the channel. While many communication systems have pilot transmissions in both uplink and downlink, the canonical communication protocol in Massive MIMO only contains uplink pilots. In this blog post, I will explain when downlink pilots are needed and why we can omit them in Massive MIMO.
Consider the communication link between a single-antenna user and an -antenna base station (BS). The channel vector varies over time and frequency in a way that is often modeled as random fading. In each channel coherence block, the BS selects a precoding vector and uses it for downlink transmission. The precoding reduces the multiantenna vector channel to an effective single-antenna scalar channel
The receiving user does not need to know the full -dimensional vectors and . However, to successfully decode the downlink data, it needs to learn the complex scalar channel . The difficulty in learning depends strongly on how the precoding vector is selected. Two examples are considered below.
In this case, the BS tries out a set of different precoding vectors from a codebook (e.g., a grid of beams, as shown to the right) by sending one downlink pilot signal through each of them. The user measures the effective channel for each beam and feeds back the index of the one that maximizes the channel gain . The BS will then transmit data using that precoding vector. During the data transmission, can have any phase, but the user already knows the phase and can compensate for it in the decoding algorithm.
If multiple users are spatially multiplexed in the downlink, the BS might use another precoding vector than the one selected by the user. For example, regularized zero-forcing might be used to suppress interference. In that case, the magnitude of the channel changes, but the phase remains the same. If phase-shift keying (PSK) is used for communication, such that no information is encoded in the signal amplitude, no estimation of is needed for decoding (but it can help to reduce the error probability). If quadrature amplitude modulation (QAM) is used instead, the user also needs to learn the channel magnitude to decode the data. The unknown magnitude can be estimated blindly from the received signals. Hence, no further pilot transmission is needed.
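As a toy illustration of the codebook-based procedure, here is a minimal Python sketch (my own, not from any standard): a hypothetical DFT grid-of-beams codebook, an i.i.d. Rayleigh channel, and the argmax feedback rule described above. All parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                # number of BS antennas (illustrative)

# Hypothetical DFT "grid of beams" codebook: each unit-norm column steers
# toward a distinct angle
codebook = np.fft.fft(np.eye(M)) / np.sqrt(M)

# i.i.d. Rayleigh fading channel
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

gains = np.abs(h @ codebook) ** 2    # squared effective channel gain per beam
best = int(np.argmax(gains))         # this index is fed back to the BS
print(best, gains[best])
```

The user never learns the full M-dimensional channel; it only compares scalar gains and reports one index.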
In this case, the user transmits a pilot signal in the uplink, which enables the BS to directly estimate the entire channel vector . For the sake of argument, suppose this estimation is perfect and that maximum ratio transmission with is used for downlink data transmission. The effective channel gain will then be
which is a positive scalar. Hence, the user only needs to learn the magnitude of because the phase is always zero. Estimation of can be implemented without downlink pilots, either by relying on channel hardening or by blind estimation based on the received signals. The former only works well in Massive MIMO with very many antennas, while the latter can be done in any system (including codebook-based precoding).
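To see numerically why the phase is always zero, here is a small sketch assuming the common normalization where the precoder is the conjugated channel divided by its norm (my assumption; the post's exact notation is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64                               # number of BS antennas (illustrative)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

w = np.conj(h) / np.linalg.norm(h)   # maximum ratio transmission precoder
g = h @ w                            # effective downlink scalar channel

# g equals the channel norm: a positive real scalar, so its phase is zero
# by construction and only the magnitude remains to be learned
print(g.real, abs(g.imag))
```

Each term of the inner product is a squared magnitude, so the imaginary parts cancel exactly and the user is left with a nonnegative real gain.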
We generally need to compensate for the channel’s phase-shift at some place in a wireless system. In codebook-based precoding, the compensation is done at the user-side, based on the received signals from the downlink pilots. This is the main approach in 4G systems, which is why downlink pilots are so commonly used. In contrast, when using reciprocity-based precoding, the phase-shifts are compensated for at the BS-side using the uplink pilots. In either case, explicit pilot signals are only needed in one direction: uplink or downlink. If the estimation is imperfect, there will be some remaining phase ambiguity, which can be estimated blindly since we know that it is small (i.e., of all possible phase-rotations that could have resulted in the received signal, the smallest one is most likely).
When we have access to TDD spectrum, we can choose between the two precoding methods mentioned above. The reciprocity-based approach is preferable in terms of less overhead signaling; one pilot per user instead of one per index in the codebook (the codebook size needs to grow with the number of antennas), and no feedback is needed. That is why this approach is taken in the canonical form of Massive MIMO.
The user terminals in reciprocity-based Massive MIMO transmit two types of uplink signals: data and pilots (a.k.a. reference signals). A terminal can potentially transmit these signals using different power levels. In the book Fundamentals of Massive MIMO, the pilots are always sent with maximum power, while the data is sent with a user-specific power level that is optimized to deliver a certain per-user performance. In the book Massive MIMO networks, the uplink power levels are also optimized, but under another assumption: each user must assign the same power to pilots and data.
Moreover, there is a series of research papers (e.g., Ref1, Ref2, Ref3) that treat the pilot and data powers as two separate optimization variables that can be optimized with respect to some performance metric, under a constraint on the total energy budget per transmission/coherence block. This gives the flexibility to “move” power from data to pilots for users at the cell edge, to improve the channel state information that the base station acquires and thereby the array gain that it obtains when decoding the data signals received over the antennas.
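As a concrete (entirely hypothetical) numerical example of such an energy budget: suppose a coherence block of 200 symbols contains 10 pilot symbols, and we want the pilots boosted 20 dB above the data. The budget constraint then fixes both power levels:

```python
# Hypothetical numbers: coherence block of tau symbols, tau_p of them pilots,
# total energy budget E (here chosen as unit average power over the block)
tau, tau_p, E = 200, 10, 200.0
boost = 100.0                        # 20 dB pilot-over-data power ratio

# Budget: tau_p * p_pilot + (tau - tau_p) * p_data = E, with p_pilot = boost * p_data
p_data = E / (tau_p * boost + (tau - tau_p))
p_pilot = boost * p_data
print(p_pilot, p_data)
```

Moving power to the pilots in this way leaves less energy for data, which is exactly the trade-off these papers optimize.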
In some cases, it is theoretically preferable to assign, for example, 20 dB higher power to pilots than to data. But does that make practical sense, bearing in mind that non-linear amplifiers are used and the peak-to-average-power ratio (PAPR) is then a concern? The answer depends on how the pilots and data are allocated over the time-frequency grid. In OFDM systems, which have an inherently high PAPR, it is discouraged to have large power differences between OFDM symbols (i.e., consecutive symbols in the time domain) since this will further increase the PAPR. However, it is perfectly fine to assign the power in an unequal manner over the subcarriers.
In the OFDM literature, there are two elementary ways to allocate pilots: block and comb type arrangements. These are illustrated in the figure below and some early references on the topic are Ref4, Ref5, Ref6.
(a): In the block type arrangement, at a given OFDM symbol time, all subcarriers either contain pilots or data. It is then preferable for a user terminal to use the same transmit power for pilots and data, to not get a prohibitively high PAPR. This is consistent with the assumptions made in the book Massive MIMO networks.
(b): In the comb type arrangement, some subcarriers always contain pilots and other subcarriers always contain data. It is then possible to assign different power to pilots and data at a user terminal. The power can be moved from pilot subcarriers to data subcarriers or vice versa, without a major impact on the PAPR. This approach enables the type of unequal pilot and data power allocations considered in Fundamentals of Massive MIMO or research papers that optimize the pilot and data powers under a total energy budget per coherence block.
The downlink in LTE uses a variation of the two elementary pilot arrangements, as illustrated in (c). It is easiest described as a comb type arrangement where some pilots are omitted and replaced with data. The number of omitted pilots depends on how many antenna ports are used; the more antennas, the more similar the pilot arrangement becomes to the comb type. Hence, unequal pilot and data power allocation is possible in LTE but maybe not as easy to implement as described above. 5G has a more flexible frame structure but supports the same arrangements as LTE.
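The two elementary arrangements can be sketched as boolean pilot masks on a time-frequency grid (the grid size and pilot positions below are illustrative choices of mine, not taken from any standard):

```python
import numpy as np

n_sym, n_sc = 14, 12     # hypothetical grid: OFDM symbols x subcarriers

# Block type: entire OFDM symbols (here symbols 0 and 7) carry only pilots
block = np.zeros((n_sym, n_sc), dtype=bool)
block[[0, 7], :] = True

# Comb type: fixed subcarriers (here every 4th) carry pilots in every symbol
comb = np.zeros((n_sym, n_sc), dtype=bool)
comb[:, ::4] = True

# Block: each OFDM symbol is all-pilot or all-data, so unequal pilot/data
# power shows up as power jumps between consecutive time-domain symbols
assert all(row.all() or not row.any() for row in block)
# Comb: every OFDM symbol mixes pilot and data subcarriers, so power can be
# moved between them without changing the per-symbol time-domain power
assert all(row.any() and not row.all() for row in comb)
print(block.sum(), comb.sum())       # pilot resource elements in each case
```

The assertions capture the structural difference that matters for PAPR: only the comb arrangement lets a terminal reshuffle power inside each OFDM symbol.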
In summary, uplink pilots and data can be transmitted at different power levels, and this flexibility can be utilized to improve the performance in Massive MIMO. It does, however, require that the pilots are arranged in practically suitable ways, such as the comb type arrangement.
Drones could shape the future of technology, especially if provided with reliable command and control (C&C) channels for safe and autonomous flying, and high-throughput links for multi-purpose live video streaming. Some months ago, Prabhu Chandhar’s guest post discussed the advantages of using massive MIMO to serve drone – or unmanned aerial vehicle (UAV) – users. More recently, our Paper 1 and Paper 2 have quantified such advantages under the realistic network conditions specified by the 3GPP. While demonstrating that massive MIMO is instrumental in enabling support for UAV users, our works also show that merely upgrading existing base stations (BSs) with massive MIMO might not be enough to provide a reliable service at all UAV flying heights. Indeed, hardware solutions need to be complemented with signal processing enhancements through all communications phases, namely, 1) UAV cell selection and association, 2) downlink BS-to-UAV transmissions, and 3) uplink UAV-to-BS transmissions. These are outlined below.
1. UAV cell selection and association
As depicted in Figure 1(a), most existing cellular BSs create a fixed beampattern towards the ground. Thanks to this, ground users tend to perceive a strong signal strength from nearby BSs, which they use for connecting to the network. Instead, aerial users such as the red drone in Figure 1(a) only receive weak sidelobe-generated signals from a nearby BS when flying above it. This results in a deployment planning issue as illustrated in Figure 1(b), where, due to the radiation of a strong sidelobe, the tri-sector BSs located at the origin can be the preferred server for far-flung UAVs (red spots). Consequently, these UAVs might experience strong interference, since they perceive signals from a multiplicity of BSs with similar power.
On the other hand, thanks to their capability of beamforming the synchronization signals used for user association, massive MIMO systems ensure that aerial users generally connect to a nearby BS. This optimized association enhances the robustness of the mobility procedures, as well as the downlink and uplink data phases.
2. Downlink BS-to-UAV transmissions
During the downlink data phase, UAV users are very sensitive to the strong inter-cell interference generated from a plurality of BSs, which are likely to be in line-of-sight. This may result in performance degradation, preventing UAVs from receiving critical C&C information, which has an approximate rate requirement of 60-100 kbps. Indeed, Figure 2 shows how conventional cellular networks (‘SU’) can only guarantee 100 kbps to a mere 6% of the UAVs flying at 150 meters. A conventional massive MIMO system (‘mMIMO’) enhances the data rates, although only 33% of the UAVs reach 100 kbps when they fly at 300 meters. This is due to a well-known effect: pilot contamination. Such an effect is particularly severe in scenarios with UAV users, since they can create strong uplink interference to many line-of-sight BSs simultaneously. In contrast, the pilot contamination decays much faster with distance for ground UEs.
In a nutshell, Figure 2 tells us that complementing conventional massive MIMO with explicit inter-cell interference suppression (‘mMIMO w/ nulls’) is essential when supporting high UAVs. In a ‘mMIMO w/ nulls’ system, BSs incorporate additional signal processing features that enable them to perform a twofold task. First, leveraging channel directionality, BSs can spatially separate non-orthogonal pilots transmitted by different UAVs. Second, by dedicating a certain number of spatial degrees of freedom to place radiation nulls, BSs can mitigate interference on the directions corresponding to users in other cells that are most vulnerable to the BS’s interference. Indeed, these additional capabilities dramatically increase the percentage of UAVs that meet the 100 kbps requirement when these are flying at 300 m, from 33% (‘mMIMO’) to a whopping 98% (‘mMIMO w/ nulls’).
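The nulling idea can be sketched in a few lines (my own simplification, assuming the BS perfectly knows the channels of the vulnerable users): project the maximum-ratio precoder onto the orthogonal complement of the victims' channel subspace, spending one spatial degree of freedom per null.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 32                                    # BS antennas (illustrative)
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
h = cn(M)                                 # channel of the served user
V = cn(M, 3)                              # channels of 3 vulnerable users in other cells

# Maximum ratio precoder without nulls
w_mr = np.conj(h) / np.linalg.norm(h)

# Spend 3 spatial degrees of freedom on radiation nulls: project conj(h)
# onto the orthogonal complement of the victims' (conjugated) channel subspace
Vc = np.conj(V)
P = np.eye(M) - Vc @ np.linalg.solve(V.T @ Vc, V.T)
w = P @ np.conj(h)
w /= np.linalg.norm(w)

leak_mr = np.abs(V.T @ w_mr).max()        # interference leaked toward victims
leak_null = np.abs(V.T @ w).max()         # ~0: nulls placed on the victims
print(leak_mr, leak_null)
```

With M much larger than the number of nulls, the projection barely reduces the served user's gain, which is why the scheme is cheap in massive MIMO.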
3. Uplink UAV-to-BS transmissions
Unlike the downlink, where UAVs should be protected to prevent a significant performance degradation, it is the ground users who we should care about in the uplink. This is because line-of-sight UAVs can generate strong interference towards many BSs, thereby overwhelming the weaker signals transmitted by non-line-of-sight ground users. The consequences of such a phenomenon are illustrated in Figure 3, where the uplink rates of ground users plummet as the number of UAVs increases.
Again, ‘mMIMO w/nulls’ – incorporating additional space-domain inter-cell interference suppression capabilities – can solve the above issue and guarantee a better performance for legacy ground users.
Overall, the efforts towards realizing aerial wireless networks are just commencing, and massive MIMO will likely play a key role. In the exciting era of fly-and-connect, we must revisit our understanding of cellular networks and develop novel architectures and techniques, catering not only for roads and buildings, but also for the sky.
Pilot contamination used to be seen as the key issue with the Massive MIMO technology, but thanks to a large number of scientific papers we now know fairly well how to deal with it. I outlined the main approaches to mitigate pilot contamination in a previous blog post and since then the paper “Massive MIMO has unlimited capacity” has also been picked up by science news channels.
When reading papers on pilot (de)contamination written by many different authors, I’ve noticed one recurrent issue: the mean-squared error (MSE) is used to measure the level of pilot contamination. A few papers only plot the MSE, while most papers contain multiple MSE plots and then one or two plots with bit-error-rates or achievable rates. As I will explain below, the MSE is a rather poor measure of pilot contamination since it cannot distinguish between noise and pilot contamination.
A simple example
Suppose the desired uplink signal is received with power and is disturbed by noise with power and interference from another user with power . By varying the variable between 0 and 1 in this simple example, we can study how the performance changes when moving power from the noise to the interference, and vice versa.
By following the standard approach for channel estimation based on uplink pilots (see Fundamentals of Massive MIMO), the MSE for i.i.d. Rayleigh fading channels is
which is independent of and, hence, does not care about whether the disturbance comes from noise or interference. This is rather intuitive since both the noise and interference are additive i.i.d. Gaussian random variables in this example. The important difference appears in the data transmission phase, where the noise takes a new independent realization while the interference is strongly correlated with the interference in the pilot phase, because it is the product of a new scalar signal and the same channel vector.
To demonstrate the important difference, suppose maximum ratio combining is used to detect the uplink data. The effective uplink signal-to-interference-and-noise-ratio (SINR) is
where is the number of antennas. For any given MSE value, it now matters how it was generated, because the SINR is a decreasing function of . The term is due to pilot contamination (it is often called coherent interference) and is proportional to the interference power . When the number of antennas is large, it is far better to have more noise during the pilot transmission than more interference!
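The argument is easy to verify with a small Monte-Carlo sketch (my own construction, not from any paper: unit signal power, least-squares channel estimation, and a unit total disturbance power of which a fraction a is interference and 1-a is noise):

```python
import numpy as np

rng = np.random.default_rng(3)
M, trials = 100, 2000
cn = lambda: (rng.standard_normal((trials, M))
              + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)

def mse_and_sinr(a):
    """Unit disturbance power, split as: interference a, noise 1-a."""
    h, hi, n = cn(), cn(), cn()                      # desired channel, interferer channel, noise
    hhat = h + np.sqrt(a) * hi + np.sqrt(1 - a) * n  # least-squares pilot estimate
    mse = np.mean(np.abs(hhat - h) ** 2)
    sig = np.mean(np.abs(np.sum(np.conj(hhat) * h, axis=1)) ** 2)
    # Data phase: the interference rides on the SAME channel hi as in the
    # pilot phase (coherent), while the noise is a fresh realization
    intf = a * np.mean(np.abs(np.sum(np.conj(hhat) * hi, axis=1)) ** 2)
    noise = (1 - a) * np.mean(np.sum(np.abs(hhat) ** 2, axis=1))
    return mse, sig / (intf + noise)

mse_n, sinr_n = mse_and_sinr(0.0)   # all disturbance is noise
mse_i, sinr_i = mse_and_sinr(1.0)   # all disturbance is pilot interference
print(mse_n, mse_i)                 # both ~1: the MSE cannot tell the cases apart
print(sinr_n, sinr_i)               # the SINR is far lower in the interference case
```

The two MSEs are indistinguishable, yet the SINR with maximum ratio combining differs by roughly a factor M/2, because only the interference produces a coherent term that scales with the number of antennas.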
Since the MSE cannot separate noise from interference, we should not try to measure the effectiveness of a “pilot decontamination” algorithm by considering the MSE. An algorithm that achieves a low MSE might be mitigating the noise while leaving the interference unaffected. If that is the case, the pilot contamination term will remain. The MSE has been used far too often when evaluating pilot decontamination algorithms, and a few papers (I found three while writing this post) considered only the MSE, which opens the door to questioning their conclusions.
The right methodology is to compute the SINR (or some other performance indicator in the data phase) with the proposed pilot decontamination algorithm and with competing algorithms. In that case, we can be sure that the full impact of the pilot contamination is taken into account.