All posts by Emil Björnson

Open Science and Massive MIMO

“Open science is just science done right” is a quote from Prof. Jon Tennant in a recent podcast. He is referring to the movement away from the conventionally closed science community, where you need to pay to gain access to research results and everyone treats data and simulation code as confidential. Since many funding agencies nowadays require open access publishing and open data, we are definitely moving in the open science direction. But different research fields are at different positions on the scale between fully open and entirely closed science. The machine learning community has embraced open science to a large extent, maybe because the research requires common data sets. When the Nature Machine Intelligence journal was founded, more than 3,000 researchers signed a petition against its closed access and author fees and promised not to publish in that journal. However, research fields that for decades have been dominated by a few high-impact journals (such as Nature) have not come as far.

IEEE is the main publisher of Massive MIMO research and has, fortunately, been quite liberal in allowing parallel publishing. At the time of writing this blog post, the IEEE policy is that authors are allowed to upload the accepted version of their paper to their personal website, their employer’s website, and arXiv.org. It is more questionable whether it is allowed to upload papers to other popular repositories such as ResearchGate – can a ResearchGate profile page count as a personal website?

It is we as researchers that need to take the steps towards open science. The publishers will only help us under the constraint that they can sustain their profits. For example, IEEE Access was created to have an open access alternative to the traditional IEEE journals, but its quality is no better than non-IEEE journals that have offered open access for a long time. I have published several papers in IEEE Access and although I’m sure that these papers are of good quality, I’ve been quite embarrassed by the poor review processes.

Personally, I try to make all my papers available on arXiv.org and also publish simulation code and data on my GitHub whenever I can, in an effort to support research reproducibility. My reasons for doing this are explained in the following video:

Outdoor Massive MIMO Demonstrations in Bristol

The University of Bristol continues to be one of the driving forces in demonstrating reciprocity-based Massive MIMO in time-division duplex. The two videos below are from an outdoor demo that was carried out in Bristol in March 2018. A 128-antenna testbed with a rectangular array of 4 rows and 32 single-polarized antennas per row was used. The demo was carried out at a carrier frequency of 3.5 GHz and featured spatial multiplexing of video streaming to 12 users.

Prof. Mark Beach, who is leading the effort, believes that Massive MIMO in sub-6 GHz bands will be the key technology for serving the users in hotspots and sport arenas. Interestingly, Prof. Beach is also an author of one of the first papers on multiuser MIMO from 1990: “The performance enhancement of multibeam adaptive base-station antennas for cellular land mobile radio systems“.

When Are Downlink Pilots Needed?

Pilots are predefined reference signals that are transmitted to let the receiver estimate the channel. While many communication systems have pilot transmissions in both uplink and downlink, the canonical communication protocol in Massive MIMO only contains uplink pilots. In this blog post, I will explain when downlink pilots are needed and why we can omit them in Massive MIMO.

Consider the communication link between a single-antenna user and an M-antenna base station (BS). The channel vector $\mathbf{h} \in \mathbb{C}^M$ varies over time and frequency in a way that is often modeled as random fading. In each channel coherence block, the BS selects a precoding vector $\mathbf{w} \in \mathbb{C}^M$ and uses it for downlink transmission. The precoding reduces the multiantenna vector channel to an effective single-antenna scalar channel

    $$g = \mathbf{h}^{T} \mathbf{w}.$$

The receiving user does not need to know the full M-dimensional vectors $\mathbf{h}$ and $\mathbf{w}$. However, to decode the downlink data in a successful way, it needs to learn the complex scalar channel $g$. The difficulty in learning $g$ depends strongly on the mechanism of precoding selection. Two examples are considered below.
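To make the dimensionality reduction concrete, here is a minimal NumPy sketch; the 64-antenna i.i.d. Rayleigh channel and the random unit-norm precoder are arbitrary illustrative choices, not a specific system:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64  # number of BS antennas (illustrative)

# i.i.d. Rayleigh fading channel h ~ CN(0, I_M)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Some unit-norm precoding vector w (random here, just for illustration)
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w /= np.linalg.norm(w)

# The M-dimensional channel collapses to one complex scalar: g = h^T w
g = h @ w
print(abs(g), np.angle(g))  # the magnitude and phase are all the user needs
```

Whatever precoder the BS picks, the user only ever observes the single complex number $g$, which is why its estimation (not the estimation of $\mathbf{h}$ or $\mathbf{w}$) is the user’s concern.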

Codebook-based precoding

In this case, the BS tries out a set of different precoding vectors from a codebook (e.g., a grid of beams, as shown to the right) by sending one downlink pilot signal through each one of them. The user measures $g$ for each one of them and feeds back the index of the one that maximizes the channel gain |g|. The BS will then transmit data using that precoding vector. During the data transmission, $g \in \mathbb{C}$ can have any phase, but the user already knows the phase and can compensate for it in the decoding algorithm.

If multiple users are spatially multiplexed in the downlink, the BS might use another precoding vector than the one selected by the user. For example, regularized zero-forcing might be used to suppress interference. In that case, the magnitude |g| of the channel changes, but the phase remains the same. If phase-shift keying (PSK) is used for communication, such that no information is encoded in the signal amplitude, no estimation of |g| is needed for decoding (but it can help to reduce the error probability). If quadrature amplitude modulation (QAM) is used instead, the user needs to learn also |g| to decode the data. The unknown magnitude can be estimated blindly based on the received signals. Hence, no further pilot transmission is needed.
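A minimal sketch of the beam-selection step, assuming a DFT grid-of-beams codebook (one common choice, used here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, B = 32, 32   # BS antennas and codebook size (illustrative values)

# Grid-of-beams codebook: B unit-norm DFT beams as columns
n, b = np.meshgrid(np.arange(M), np.arange(B), indexing="ij")
codebook = np.exp(-2j * np.pi * n * b / B) / np.sqrt(M)

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# One downlink pilot per beam lets the user measure g_b = h^T w_b
g_all = h @ codebook

# The user feeds back only the index of the strongest beam
best = int(np.argmax(np.abs(g_all)))
print(best, np.abs(g_all[best]))
```

Note that the feedback is just one integer, but the price is one downlink pilot per codebook entry.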

Reciprocity-based precoding

In this case, the user transmits a pilot signal in the uplink, which enables the BS to directly estimate the entire channel vector $\mathbf{h}$. For the sake of argument, suppose this estimation is perfect and that maximum ratio transmission with $\mathbf{w}=\mathbf{h}^*/\| \mathbf{h} \|$ is used for downlink data transmission. The effective channel gain will then be

    $$g = \mathbf{h}^{T} \frac{\mathbf{h}^*}{\| \mathbf{h} \|} = \| \mathbf{h} \|,$$

which is a positive scalar. Hence, the user only needs to learn the magnitude of $g$ because the phase is always zero. Estimation of |g| can be implemented without downlink pilots, either by relying on channel hardening or by blind estimation based on the received signals. The former only works well in Massive MIMO with very many antennas, while the latter can be done in any system (including codebook-based precoding).
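The calculation is easy to verify numerically; a small sketch assuming a perfect channel estimate and maximum ratio transmission, as in the derivation above:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 64  # number of BS antennas (illustrative)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Maximum ratio transmission based on a (here: perfect) channel estimate
w = np.conj(h) / np.linalg.norm(h)

g = h @ w   # effective downlink channel: equals ||h||, real and positive
print(g)
```

Since $g$ comes out real and positive, there is no phase left for the user to estimate, which is exactly why the downlink pilot can be skipped.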

Conclusion

We generally need to compensate for the channel’s phase-shift at some place in a wireless system. In codebook-based precoding, the compensation is done at the user-side, based on the received signals from the downlink pilots. This is the main approach in 4G systems, which is why downlink pilots are so commonly used. In contrast, when using reciprocity-based precoding, the phase-shifts are compensated for at the BS-side using the uplink pilots. In either case, explicit pilot signals are only needed in one direction: uplink or downlink. If the estimation is imperfect, there will be some remaining phase ambiguity, which can be estimated blindly since we know that it is small (i.e., of all possible phase-rotations that could have resulted in the received signal, the smallest one is most likely).

When we have access to TDD spectrum, we can choose between the two precoding methods mentioned above. The reciprocity-based approach is preferable in terms of less overhead signaling; one pilot per user instead of one per index in the codebook (the codebook size needs to grow with the number of antennas), and no feedback is needed. That is why this approach is taken in the canonical form of Massive MIMO.
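A back-of-the-envelope comparison of the pilot overhead, with purely illustrative numbers (10 users, 64 antennas, and a codebook of one beam per antenna dimension are assumptions for this sketch, not values from any standard):

```python
K = 10   # number of users (illustrative)
M = 64   # number of BS antennas (illustrative)

# Reciprocity-based: one uplink pilot per user, no feedback needed
reciprocity_overhead = K

# Codebook-based: one downlink pilot per beam, and the codebook size
# must grow with the number of antennas to retain the beamforming gain
B = M    # assumption: one beam per antenna dimension
codebook_overhead = B   # plus index feedback from every user

print(reciprocity_overhead, codebook_overhead)  # 10 vs 64 pilot symbols
```

The gap widens as more antennas are added, since the uplink pilot cost depends only on the number of users.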

Joint Massive MIMO Deployment for LTE and 5G

The American telecom operator Sprint is keen on mentioning Massive MIMO in the marketing of its 5G network deployments, as I wrote about a year ago. You can find their new video below; it gives new insights into the deployment strategy for their new 64-antenna BSs. Initially, each base station will be divided between LTE and 5G operation. According to CTO Dr. John Saw, the left half of the array will be used for LTE and the right half for 5G. This will lead to a 3 dB loss in SNR and also a reduced multiplexing capability, but I suppose that Sprint is only doing this temporarily, until the number of 5G users is sufficiently large to motivate a 5G-only base station. Another thing that one can infer from the video is that the LTE/5G split is software-defined, so it can be changed without physical modifications to the base station hardware.

5G Dilemma: Higher throughput but also higher energy consumption

I was recently interviewed by IEEE Spectrum for the article: The 5G Dilemma: More Base Stations, More Antennas—Less Energy?

Since 5G is being built in addition to the existing cellular networks, the energy consumption of the cellular network infrastructure as a whole will certainly increase when 5G is introduced. It is too early to say how much the energy consumption will grow, but even if the implementation is vastly more energy efficient than before, we need to spend more energy to gain more network capacity.

It is important to keep in mind that having a high energy consumption is not necessarily a problem. The real issue is that the power plants that power the cellular networks are mainly extracting energy from non-renewable sources that have a negative impact on the environment. It is the same issue that electric cars have – these are only environmentally friendly if they are charged with energy from environmentally friendly power plants. Hence, we need to keep the energy consumption of cellular networks down until cleaner power plants are widely used.

If you want to learn more about energy efficiency after reading the article in IEEE Spectrum, I recommend the following overview video (you find all the technical details in Section 5 in my book Massive MIMO networks):

Are Pilots and Data Transmitted With the Same Power?

The user terminals in reciprocity-based Massive MIMO transmit two types of uplink signals: data and pilots (a.k.a. reference signals). A terminal can potentially transmit these signals using different power levels. In the book Fundamentals of Massive MIMO, the pilots are always sent with maximum power, while the data is sent with a user-specific power level that is optimized to deliver a certain per-user performance. In the book Massive MIMO networks, the uplink power levels are also optimized, but under another assumption: each user must assign the same power to pilots and data.

Moreover, there is a series of research papers (e.g., Ref1, Ref2, Ref3) that treat the pilot and data powers as two separate optimization variables that can be optimized with respect to some performance metric, under a constraint on the total energy budget per transmission/coherence block. This gives the flexibility to “move” power from data to pilots for users at the cell edge, to improve the channel state information that the base station acquires and thereby the array gain that it obtains when decoding the data signals received over the antennas.

In some cases, it is theoretically preferable to assign, for example, 20 dB higher power to pilots than to data. But does that make practical sense, bearing in mind that non-linear amplifiers are used and the peak-to-average-power ratio (PAPR) is then a concern? The answer depends on how the pilots and data are allocated over the time-frequency grid. In OFDM systems, which have an inherently high PAPR, it is discouraged to have large power differences between OFDM symbols (i.e., consecutive symbols in the time domain) since this will further increase the PAPR. However, it is perfectly fine to assign the power in an unequal manner over the subcarriers.

In the OFDM literature, there are two elementary ways to allocate pilots: block and comb type arrangements. These are illustrated in the figure below and some early references on the topic are Ref4, Ref5, Ref6.

(a): In the block type arrangement, at a given OFDM symbol time, all subcarriers either contain pilots or data. It is then preferable for a user terminal to use the same transmit power for pilots and data, to not get a prohibitively high PAPR. This is consistent with the assumptions made in the book Massive MIMO networks.

(b): In the comb type arrangement, some subcarriers always contain pilots and other subcarriers always contain data. It is then possible to assign different power to pilots and data at a user terminal. The power can be moved from pilot subcarriers to data subcarriers or vice versa, without a major impact on the PAPR. This approach enables the type of unequal pilot and data power allocations considered in Fundamentals of Massive MIMO or research papers that optimize the pilot and data powers under a total energy budget per coherence block.
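As a small numerical sketch of this idea (the subcarrier count, comb spacing, and 6 dB boost below are arbitrary choices, not values from any standard):

```python
import numpy as np

n_sub = 12                             # subcarriers in one OFDM symbol
pilot_idx = np.arange(0, n_sub, 4)     # comb type: every 4th subcarrier
data_idx = np.setdiff1d(np.arange(n_sub), pilot_idx)

total_energy = float(n_sub)            # fixed energy budget for the symbol

# Boost the pilot subcarriers by 6 dB relative to the data subcarriers,
# while keeping the total transmit energy unchanged
boost = 10 ** (6 / 10)
p_data = total_energy / (len(data_idx) + boost * len(pilot_idx))
p_pilot = boost * p_data

power = np.empty(n_sub)
power[pilot_idx] = p_pilot
power[data_idx] = p_data
print(power.sum(), p_pilot / p_data)   # budget preserved, 6 dB ratio kept
```

Since all subcarriers within the symbol are transmitted simultaneously, the total symbol energy (and hence the time-domain envelope statistics) stays roughly the same while the power is redistributed in frequency.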

The downlink in LTE uses a variation of the two elementary pilot arrangements, as illustrated in (c). It is most easily described as a comb type arrangement where some pilots are omitted and replaced with data. The number of omitted pilots depends on how many antenna ports are used; the more antennas, the more similar the pilot arrangement becomes to the comb type. Hence, unequal pilot and data power allocation is possible in LTE, but maybe not as easy to implement as described above. 5G has a more flexible frame structure but supports the same arrangements as LTE.

In summary, uplink pilots and data can be transmitted at different power levels, and this flexibility can be utilized to improve the performance in Massive MIMO. It does, however, require that the pilots are arranged in practically suitable ways, such as the comb type arrangement.

Pilot Contamination is Not Captured by the MSE

Pilot contamination used to be seen as the key issue with the Massive MIMO technology, but thanks to a large number of scientific papers we now know fairly well how to deal with it. I outlined the main approaches to mitigate pilot contamination in a previous blog post and since then the paper Massive MIMO has unlimited capacity has also been picked up by science news channels.

When reading papers on pilot (de)contamination written by many different authors, I’ve noticed one recurrent issue: the mean-squared error (MSE) is used to measure the level of pilot contamination. A few papers only plot the MSE, while most papers contain multiple MSE plots and then one or two plots with bit-error-rates or achievable rates. As I will explain below, the MSE is a rather poor measure of pilot contamination since it cannot distinguish between noise and pilot contamination.

A simple example

Suppose the desired uplink signal is received with power p and is disturbed by noise with power (1-a) and interference from another user with power a. By varying the variable a between 0 and 1 in this simple example, we can study how the performance changes when moving power from the noise to the interference, and vice versa.

By following the standard approach for channel estimation based on uplink pilots (see Fundamentals of Massive MIMO), the MSE for i.i.d. Rayleigh fading channels is

    $$\textrm{MSE} = \frac{p}{p+(1-a)+a} = \frac{p}{p+1}, $$

which is independent of a and, hence, does not care about whether the disturbance comes from noise or interference. This is rather intuitive since both the noise and interference are additive i.i.d. Gaussian random variables in this example. The important difference appears in the data transmission phase, where the noise takes a new independent realization and the interference is strongly correlated with the interference in the pilot phase, because it is the product of a new scalar signal and the same channel vector.
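This independence is easy to check with a Monte Carlo simulation of the pilot phase described above (the sample size and random seed are arbitrary; a per-antenna scalar model with LMMSE estimation is assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 1.0, 200_000    # signal power and number of Monte Carlo samples

mses = []
for a in [0.0, 0.5, 1.0]:   # shift the disturbance from noise to interference
    h = np.sqrt(p / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    noise = np.sqrt((1 - a) / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    interf = np.sqrt(a / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    y = h + interf + noise          # received pilot signal
    h_hat = (p / (p + 1)) * y       # LMMSE channel estimate
    mses.append(np.mean(np.abs(h - h_hat) ** 2))

print(mses)   # all close to p/(p+1) = 0.5, regardless of a
```

All three values come out the same (up to simulation noise), confirming that the MSE is blind to whether the disturbance is noise or interference.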

To demonstrate the important difference, suppose maximum ratio combining is used to detect the uplink data. The effective uplink signal-to-interference-and-noise-ratio (SINR) is

    $$\textrm{SINR} = \frac{p(1-\textrm{MSE}) M}{p+1+a M \cdot \textrm{MSE}}$$

where M is the number of antennas. For any given MSE value, it now matters how it was generated, because the SINR is a decreasing function of a. The term $a M \cdot \textrm{MSE}$ is due to pilot contamination (it is often called coherent interference) and is proportional to the interference power $a$. When the number of antennas is large, it is far better to have more noise during the pilot transmission than more interference!
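Evaluating the two closed-form expressions above numerically (with p = 1 and M = 100 as arbitrary example values) makes the contrast clear:

```python
import numpy as np

p, M = 1.0, 100        # example values for signal power and antenna count

mse = p / (p + 1)      # identical for every a: the MSE cannot tell
sinrs = [p * (1 - mse) * M / (p + 1 + a * M * mse) for a in (0.0, 0.5, 1.0)]
print(mse, sinrs)      # same MSE, but the SINR collapses as a grows
```

With these numbers the SINR drops from 25 at a = 0 (pure noise) to below 1 at a = 1 (pure interference), even though the MSE is 0.5 in all three cases.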

Implications

Since the MSE cannot separate noise from interference, we should not measure the effectiveness of a “pilot decontamination” algorithm by considering the MSE. An algorithm that achieves a low MSE can potentially be mitigating the noise while leaving the interference unaffected. If that is the case, the pilot contamination term $a M \cdot \textrm{MSE}$ will remain. The MSE has been used far too often when evaluating pilot decontamination algorithms, and a few papers (I found three while writing this post) considered only the MSE, which opens the door to questioning their conclusions.

The right methodology is to compute the SINR (or some other performance indicator in the data phase) with the proposed pilot decontamination algorithm and with competing algorithms. In that case, we can be sure that the full impact of the pilot contamination is taken into account.