The American telecom operator Sprint is keen on mentioning Massive MIMO in the marketing of its 5G network deployments, as I wrote about a year ago. Their new video, which you can find below, gives new insights into the deployment strategy for their 64-antenna BSs. Initially, each base station will be divided between LTE and 5G operation. According to CTO Dr. John Saw, the left half of the array will be used for LTE and the right half for 5G. Since the array gain is proportional to the number of antennas, using only half of the array leads to a 3 dB loss in SNR, as well as a reduced multiplexing capability, but I suppose that Sprint is only doing this temporarily, until the number of 5G users is large enough to motivate a 5G-only base station. Another thing one can infer from the video is that the LTE/5G split is software-defined, so it can be reconfigured without any physical changes to the base station hardware.
I was recently interviewed by IEEE Spectrum for the article: The 5G Dilemma: More Base Stations, More Antennas—Less Energy?
Since 5G is being built in addition to the existing cellular networks, the energy consumption of the cellular network infrastructure as a whole will certainly increase when 5G is introduced. It is too early to say how much the energy consumption will grow, but even if the 5G implementation is vastly more energy efficient than previous technologies, we will need to spend more energy to gain more network capacity.
It is important to keep in mind that a high energy consumption is not necessarily a problem. The real issue is that the power plants that supply the cellular networks mainly extract energy from non-renewable sources that have a negative impact on the environment. It is the same issue that electric cars have: they are only environmentally friendly if they are charged with energy from environmentally friendly power plants. Hence, we need to keep the energy consumption of cellular networks down until cleaner power plants are widely used.
If you want to learn more about energy efficiency after reading the article in IEEE Spectrum, I recommend the following overview video (you can find all the technical details in Section 5 of my book Massive MIMO networks):
The user terminals in reciprocity-based Massive MIMO transmit two types of uplink signals: data and pilots (a.k.a. reference signals). A terminal can potentially transmit these signals using different power levels. In the book Fundamentals of Massive MIMO, the pilots are always sent with maximum power, while the data is sent with a user-specific power level that is optimized to deliver a certain per-user performance. In the book Massive MIMO networks, the uplink power levels are also optimized, but under another assumption: each user must assign the same power to pilots and data.
Moreover, there is a series of research papers (e.g., Ref1, Ref2, Ref3) that treat the pilot and data powers as two separate optimization variables that can be optimized with respect to some performance metric, under a constraint on the total energy budget per transmission/coherence block. This gives the flexibility to “move” power from data to pilots for users at the cell edge, to improve the channel state information that the base station acquires and thereby the array gain that it obtains when decoding the data signals received over the antennas.
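To make this concrete, here is a minimal Python sketch of such an optimization, for a simplified single-user setup of my own (the block length, energy budget, and the standard closed-form MRC SINR expression in the code are illustrative assumptions, not taken from the referenced papers):

```python
import numpy as np

# Minimal single-user sketch: split a per-block energy budget between
# pilots and data, and evaluate the effective uplink SINR with MRC.
M = 64          # base station antennas
tau_c = 200     # symbols per coherence block
tau_p = 10      # pilot symbols per block
sigma2 = 1.0    # noise power
E = 20.0        # total energy budget per block (hypothetical cell-edge user)

fractions = np.linspace(0.01, 0.99, 99)   # share of the budget spent on pilots
sinr = np.zeros_like(fractions)

for i, f in enumerate(fractions):
    p_pilot = f * E / tau_p                 # power per pilot symbol
    p_data = (1 - f) * E / (tau_c - tau_p)  # power per data symbol
    # MMSE channel estimation quality for i.i.d. Rayleigh fading (unit variance)
    gamma = tau_p * p_pilot / (tau_p * p_pilot + sigma2)
    # Effective SINR with maximum ratio combining: array gain M*gamma on the
    # desired signal, while the estimation error acts as additional noise
    sinr[i] = M * gamma * p_data / (p_data * (1 - gamma) + sigma2)

best = np.argmax(sinr)
print(f"Best pilot share of the energy budget: {fractions[best]:.2f}")
ratio = (fractions[best] * E / tau_p) / ((1 - fractions[best]) * E / (tau_c - tau_p))
print(f"Pilot-to-data power ratio at optimum: {10 * np.log10(ratio):.1f} dB")
```

In this toy setup, the optimum assigns several dB higher power per pilot symbol than per data symbol, in line with the intuition that a power-limited cell-edge user should "move" power towards the pilots.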
In some cases, it is theoretically preferable to assign, for example, 20 dB higher power to pilots than to data. But does that make practical sense, bearing in mind that non-linear amplifiers are used and the peak-to-average-power ratio (PAPR) is then a concern? The answer depends on how the pilots and data are allocated over the time-frequency grid. In OFDM systems, which have an inherently high PAPR, it is discouraged to have large power differences between OFDM symbols (i.e., consecutive symbols in the time domain) since this will further increase the PAPR. However, it is perfectly fine to assign the power in an unequal manner over the subcarriers.
In the OFDM literature, there are two elementary ways to allocate pilots: block and comb type arrangements. These are illustrated in the figure below and some early references on the topic are Ref4, Ref5, Ref6.
(a): In the block type arrangement, at a given OFDM symbol time, all subcarriers either contain pilots or data. It is then preferable for a user terminal to use the same transmit power for pilots and data, to not get a prohibitively high PAPR. This is consistent with the assumptions made in the book Massive MIMO networks.
(b): In the comb type arrangement, some subcarriers always contain pilots and other subcarriers always contain data. It is then possible to assign different power to pilots and data at a user terminal. The power can be moved from pilot subcarriers to data subcarriers or vice versa, without a major impact on the PAPR. This approach enables the type of unequal pilot and data power allocations considered in Fundamentals of Massive MIMO or research papers that optimize the pilot and data powers under a total energy budget per coherence block.
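To illustrate why the comb type arrangement is more tolerant to pilot power boosting, here is a small Python sketch (the grid dimensions, pilot positions, and the 10 dB boost are arbitrary assumptions of mine) that computes the total transmit power in each OFDM symbol for the two arrangements:

```python
import numpy as np

n_sym, n_sc = 14, 48        # OFDM symbols x subcarriers (arbitrary example grid)
p_data = 1.0
p_pilot = 10.0              # pilots boosted by 10 dB relative to data

def per_symbol_power(pilot_mask):
    """Total transmit power in each OFDM symbol for a given pilot placement."""
    power = np.where(pilot_mask, p_pilot, p_data)
    return power.sum(axis=1)

# (a) Block type: OFDM symbols 0 and 7 carry only pilots
block = np.zeros((n_sym, n_sc), dtype=bool)
block[[0, 7], :] = True

# (b) Comb type: every 6th subcarrier carries only pilots
comb = np.zeros((n_sym, n_sc), dtype=bool)
comb[:, ::6] = True

print("Block type, power per symbol:", per_symbol_power(block))
print("Comb type,  power per symbol:", per_symbol_power(comb))
```

With the block type arrangement, the boosted pilot symbols stick out with 10 dB more power than the data symbols, which aggravates the PAPR issue, while the comb type keeps the per-symbol power constant over time.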
The downlink in LTE uses a variation of the two elementary pilot arrangements, as illustrated in (c). It is most easily described as a comb type arrangement where some pilots are omitted and replaced with data. The number of omitted pilots depends on how many antenna ports are used; the more antennas, the more similar the pilot arrangement becomes to the comb type. Hence, unequal pilot and data power allocation is possible in LTE, but maybe not as easy to implement as described above. 5G has a more flexible frame structure, but it supports the same arrangements as LTE.
In summary, uplink pilots and data can be transmitted at different power levels, and this flexibility can be utilized to improve the performance in Massive MIMO. It does, however, require that the pilots are arranged in practically suitable ways, such as the comb type arrangement.
Drones could shape the future of technology, especially if provided with reliable command and control (C&C) channels for safe and autonomous flying, and high-throughput links for multi-purpose live video streaming. Some months ago, Prabhu Chandhar’s guest post discussed the advantages of using massive MIMO to serve drone – or unmanned aerial vehicle (UAV) – users. More recently, our Paper 1 and Paper 2 have quantified such advantages under the realistic network conditions specified by the 3GPP. While demonstrating that massive MIMO is instrumental in enabling support for UAV users, our works also show that merely upgrading existing base stations (BSs) with massive MIMO might not be enough to provide a reliable service at all UAV flying heights. Indeed, hardware solutions need to be complemented with signal processing enhancements through all communication phases, namely, 1) UAV cell selection and association, 2) downlink BS-to-UAV transmissions, and 3) uplink UAV-to-BS transmissions. These are outlined below.
1. UAV cell selection and association
As depicted in Figure 1(a), most existing cellular BSs create a fixed beampattern towards the ground. Thanks to this, ground users tend to perceive a strong signal from nearby BSs, which they use for connecting to the network. In contrast, aerial users such as the red drone in Figure 1(a) only receive weak sidelobe-generated signals from a nearby BS when flying above it. This results in a deployment planning issue, as illustrated in Figure 1(b), where, due to the radiation of a strong sidelobe, the tri-sector BS located at the origin can be the preferred server even for far-flung UAVs (red spots). Consequently, these UAVs might experience strong interference, since they perceive signals from a multiplicity of BSs with similar power.
On the other hand, thanks to their capability of beamforming the synchronization signals used for user association, massive MIMO systems ensure that aerial users generally connect to a nearby BS. This optimized association enhances the robustness of the mobility procedures, as well as the downlink and uplink data phases.
2. Downlink BS-to-UAV transmissions
During the downlink data phase, UAV users are very sensitive to the strong inter-cell interference generated by a plurality of BSs, which are likely to be in line-of-sight. This may result in performance degradation, preventing UAVs from receiving critical C&C information, which has an approximate rate requirement of 60-100 kbps. Indeed, Figure 2 shows that conventional cellular networks (‘SU’) can only guarantee 100 kbps to a mere 6% of the UAVs flying at 150 meters. A conventional massive MIMO system (‘mMIMO’) enhances the data rates, although only 33% of the UAVs reach 100 kbps when they fly at 300 meters. This is due to a well-known effect: pilot contamination. This effect is particularly severe in scenarios with UAV users, since they can create strong uplink interference to many line-of-sight BSs simultaneously. In contrast, the pilot contamination caused by ground users decays much faster with distance.
In a nutshell, Figure 2 tells us that complementing conventional massive MIMO with explicit inter-cell interference suppression (‘mMIMO w/ nulls’) is essential when supporting UAVs at high altitudes. In a ‘mMIMO w/ nulls’ system, BSs incorporate additional signal processing features that enable them to perform a twofold task. First, leveraging channel directionality, BSs can spatially separate non-orthogonal pilots transmitted by different UAVs. Second, by dedicating a certain number of spatial degrees of freedom to placing radiation nulls, BSs can mitigate interference in the directions of the users in other cells that are most vulnerable to the BS’s interference. Indeed, these additional capabilities dramatically increase the percentage of UAVs that meet the 100 kbps requirement when flying at 300 m, from 33% (‘mMIMO’) to a whopping 98% (‘mMIMO w/ nulls’).
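The exact processing is described in the papers; as a rough illustration of the null-placement idea, the following Python sketch (a toy model of my own, not the simulation setup of the papers) projects a maximum ratio transmission (MRT) precoder onto the orthogonal complement of the channels of a few vulnerable users in neighboring cells:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64                     # BS antennas
K_victims = 4              # vulnerable users in other cells to protect

# i.i.d. Rayleigh fading channels: served UAV and victims in other cells
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
G = (rng.standard_normal((M, K_victims)) +
     1j * rng.standard_normal((M, K_victims))) / np.sqrt(2)

# Conventional maximum ratio transmission (MRT)
w_mrt = h / np.linalg.norm(h)

# MRT with radiation nulls: project the precoder onto the orthogonal
# complement of the victims' channel subspace, spending K_victims
# spatial degrees of freedom on interference suppression
Q, _ = np.linalg.qr(G)                  # orthonormal basis of victim subspace
w_null = h - Q @ (Q.conj().T @ h)
w_null /= np.linalg.norm(w_null)

# Beamforming gain towards the served user and leakage towards the victims
gain = lambda w: 10 * np.log10(np.abs(w.conj() @ h) ** 2)
leak = lambda w: 10 * np.log10(np.sum(np.abs(w.conj() @ G) ** 2) + 1e-12)

print(f"MRT:        gain {gain(w_mrt):.1f} dB, leakage {leak(w_mrt):.1f} dB")
print(f"With nulls: gain {gain(w_null):.1f} dB, leakage {leak(w_null):.1f} dB")
```

The projection sacrifices a small fraction of the array gain towards the served user in exchange for a dramatic reduction of the interference leakage towards the protected users, mirroring the second feature described above.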
3. Uplink UAV-to-BS transmissions
Unlike the downlink, where UAVs must be protected to prevent a significant performance degradation, in the uplink it is the ground users we should care about. This is because line-of-sight UAVs can generate strong interference towards many BSs, thereby overwhelming the weaker signals transmitted by non-line-of-sight ground users. The consequences of this phenomenon are illustrated in Figure 3, where the uplink rates of ground users plummet as the number of UAVs increases.
Again, ‘mMIMO w/ nulls’ – incorporating additional space-domain inter-cell interference suppression capabilities – can solve this issue and guarantee better performance for legacy ground users.
Overall, the efforts towards realizing aerial wireless networks are just commencing, and massive MIMO will likely play a key role. In the exciting era of fly-and-connect, we must revisit our understanding of cellular networks and develop novel architectures and techniques, catering not only for roads and buildings, but also for the sky.
Pilot contamination used to be seen as the key issue with the Massive MIMO technology, but thanks to a large number of scientific papers we now know fairly well how to deal with it. I outlined the main approaches to mitigate pilot contamination in a previous blog post and since then the paper Massive MIMO has unlimited capacity has also been picked up by science news channels.
When reading papers on pilot (de)contamination written by many different authors, I’ve noticed one recurrent issue: the mean-squared error (MSE) is used to measure the level of pilot contamination. A few papers only plot the MSE, while most papers contain multiple MSE plots and then one or two plots with bit-error-rates or achievable rates. As I will explain below, the MSE is a rather poor measure of pilot contamination since it cannot distinguish between noise and pilot contamination.
A simple example
Suppose the desired uplink signal is received with power $p$ and is disturbed by noise with power $(1-\alpha)\sigma^2$ and interference from another user with power $\alpha\sigma^2$. By varying the variable $\alpha$ between 0 and 1 in this simple example, we can study how the performance changes when moving power from the noise to the interference, and vice versa.
By following the standard approach for channel estimation based on uplink pilots (see Fundamentals of Massive MIMO), the MSE for i.i.d. Rayleigh fading channels is

$$\mathrm{MSE} = \frac{\sigma^2}{p+\sigma^2},$$

which is independent of $\alpha$ and, hence, does not care about whether the disturbance comes from noise or interference. This is rather intuitive since both the noise and interference are additive i.i.d. Gaussian random variables in this example. The important difference appears in the data transmission phase, where the noise takes a new independent realization while the interference is strongly correlated with the interference in the pilot phase, because it is the product of a new scalar signal and the same channel vector.
To demonstrate the important difference, suppose maximum ratio combining is used to detect the uplink data. The effective uplink signal-to-interference-and-noise ratio (SINR) is

$$\mathrm{SINR} = \frac{M \frac{p^2}{p+\sigma^2}}{p+\sigma^2 + M \frac{(\alpha\sigma^2)^2}{p+\sigma^2}},$$

where $M$ is the number of antennas. For any given MSE value, it now matters how it was generated, because the SINR is a decreasing function of $\alpha$. The term $M \frac{(\alpha\sigma^2)^2}{p+\sigma^2}$ is due to pilot contamination (it is often called coherent interference) and grows with the interference power $\alpha\sigma^2$. When the number of antennas $M$ is large, it is far better to have more noise during the pilot transmission than more interference!
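Evaluating these two formulas numerically makes the contrast plain; a few lines of Python suffice (the values of $p$, $\sigma^2$, and $M$ are arbitrarily chosen for illustration):

```python
import numpy as np

p, sigma2, M = 1.0, 0.5, 100   # signal power, total disturbance power, antennas

for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mse = sigma2 / (p + sigma2)                          # independent of alpha
    coherent = M * (alpha * sigma2) ** 2 / (p + sigma2)  # pilot contamination term
    sinr = M * p ** 2 / (p + sigma2) / (p + sigma2 + coherent)
    print(f"alpha={alpha:.2f}: MSE={mse:.3f}, SINR={10 * np.log10(sinr):.1f} dB")
```

The MSE stays at the same value for every $\alpha$, while the SINR drops by more than 10 dB in this example as the disturbance shifts from noise to interference.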
Since the MSE cannot separate noise from interference, we should not try to measure the effectiveness of a “pilot decontamination” algorithm by considering the MSE. An algorithm that achieves a low MSE can potentially be mitigating the noise while leaving the interference unaffected. If that is the case, the pilot contamination term will remain. The MSE has been used far too often when evaluating pilot decontamination algorithms, and a few papers (I found three while writing this post) only considered the MSE, which opens the door to questioning their conclusions.
The right methodology is to compute the SINR (or some other performance indicator in the data phase) with the proposed pilot decontamination algorithm and with competing algorithms. In that case, we can be sure that the full impact of the pilot contamination is taken into account.
In this video clip I talk about an important lesson learned while working on Massive MIMO:
And in this video clip I talk more in general about our book, Fundamentals of Massive MIMO:
The Norwegian startup company Super Radio has, during the past year, carried out several channel measurement campaigns on Massive MIMO for land-to-sea communications, within a project called MAMIME (LTE, WIFI and 5G Massive MIMO Communications in Maritime Propagation Environments). Several other companies and universities are involved in the project.
The maritime propagation environment is clearly different from the urban and suburban propagation environments that are normally modeled in wireless communications. For example, the ground plane consists of water, and the sea waves are likely to reflect radio waves differently than the hard surfaces on land. Apart from islands, there are few objects at sea that can create multipath propagation. Hence, a strong line-of-sight path is key in these use cases.
The MAMIME project is using a 128-antenna horizontal array, which is claimed to be the largest in the world. Such an array can provide narrow horizontal beams, but no elevation beamforming – which is probably not needed, since the receivers will all be at sea level. The array consists of four subarrays, each with dimensions of 1070 x 630 mm. Frequencies relevant for LTE and WiFi have been considered so far. The goal of the project is to provide “extremely high throughputs, stability and long coverage” for maritime communications. I suppose that range extension and spatial multiplexing of multiple ships is what this type of Massive MIMO system can achieve, compared to a conventional system.
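As a rough sanity check on the claim about narrow horizontal beams, here is a short Python sketch using the standard half-power beamwidth approximation for a broadside uniform linear array (half-wavelength antenna spacing is my assumption; the actual array geometry may differ):

```python
import numpy as np

# Rough half-power beamwidth (HPBW) of a broadside uniform linear array with
# half-wavelength spacing, using the approximation HPBW ~ 0.886 * lambda / L,
# where the aperture is L = n * lambda / 2
for n in [8, 64, 128]:
    hpbw = np.degrees(0.886 * 2 / n)
    print(f"{n:4d} antennas: ~{hpbw:.2f} degrees half-power beamwidth")
```

Under these assumptions, a 128-element array forms beams on the order of one degree wide in the horizontal plane, which supports the range-extension and spatial multiplexing use cases mentioned above.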
A first video about the project was published in December 2017:
Now a second video has been released; see below. Both videos were recorded outside Trondheim, but Kan Yang at Super Radio told me that further measurements will soon be conducted outside Oslo, with a focus on LTE Massive MIMO.