For this reason, I used to say that outdoor music festivals, where crowds of 100,000 people gather to see their favorite bands, would be a first deployment scenario for Massive MIMO. Something fairly similar has now happened: the Russian telecom operator MTS has deployed more than 40 state-of-the-art LTE sites with Massive MIMO functionality in the seven cities where the 2018 FIFA World Cup is currently taking place. The base stations cover the stadiums, fan zones, airports, train stations, and major parks and squares; in other words, the places where huge crowds of football fans are expected.
In the press release, Andrei Ushatsky, Vice President of MTS, says:
“This launch is one of Europe’s largest Massive MIMO deployments, covering seven Russian cities, and is a major contribution by MTS in the preparation of the country’s infrastructure for the global sporting event of the year. Our Massive MIMO technology, using Ericsson equipment, significantly increases network capacity, allowing tens of thousands of fans together in one place to enjoy high-speed mobile internet without any loss in speed or quality.”
While this is one of the first major deployments of Massive MIMO, more will certainly follow in the coming years. More research into the development and implementation of advanced signal processing and resource management schemes will also be needed for many years to come – this is just the beginning.
Clearly, we will see a larger focus on TDD in future networks, but there are some traditional disadvantages of TDD that we need to bear in mind when designing these networks. I describe the three main ones below.
Link budget
Even if we allocate the same amount of time-frequency resources to the uplink and downlink in TDD and FDD operation, there is an important difference: in FDD we transmit over half the bandwidth all of the time, while in TDD we transmit over the whole bandwidth half of the time. Since the power amplifier is only active half of the time in TDD, the average radiated power is effectively cut in half if the peak power is the same. This means that the SNR is 3 dB lower in TDD than in FDD when transmitting at maximum peak power.
Massive MIMO systems are generally interference-limited and use power control to assign a reduced transmit power to most users, so the impact of the 3 dB SNR loss at maximum peak power is immaterial in many cases. However, there will always be some unfortunate low-SNR users (e.g., at the cell edge) that would like to communicate at maximum peak power in both uplink and downlink, and these users suffer from the 3 dB SNR loss. If they are still able to connect to the base station, the beamforming gain provided by Massive MIMO will probably more than compensate for the loss in link budget compared with single-antenna systems. One can debate whether it is the peak power or the average radiated power that should be constrained in practice.
Guard period
In TDD, everyone in the cell must switch between uplink and downlink mode at the same time. Since the users are at different distances from the base station and have different delay spreads, they receive the end of the downlink transmission block at different time instants. If a cell-center user starts to transmit in the uplink immediately after receiving the full downlink block, then users at the cell edge will receive a combination of the delayed downlink transmission and the cell-center user's uplink transmission. To avoid such uplink-downlink interference, there is a guard period in TDD so that all users wait with their uplink transmissions until the outermost users are done with the downlink.
In fact, the base station gives every user a timing advance to make sure that, when the uplink commences, the users' uplink signals are received time-synchronized at the base station. Therefore, the outermost users start transmitting in the uplink before the cell-center users. Thanks to this feature, a large guard period is only needed when switching from downlink to uplink, while the uplink-to-downlink switching period can be short. This is positive for Massive MIMO operation, since we want to use uplink CSI in the next downlink block, but not the other way around.
The guard period in TDD must grow with the cell size, meaning that a larger fraction of the transmission resources disappears. Since no guard periods are needed in FDD, the largest benefits of TDD will be seen in urban scenarios, where the macro cells have a radius of a few hundred meters and the delay spread is short.
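A back-of-the-envelope sketch of this trade-off; the cell radii, delay spreads, and 1 ms switching period are assumed example numbers, not values from the post:

```python
# Sketch: guard-period overhead in TDD. The downlink-to-uplink guard must
# cover the round-trip propagation delay to the cell edge plus the delay
# spread. All numbers below are illustrative assumptions.
c = 3e8  # speed of light [m/s]

def guard_period(cell_radius_m, delay_spread_s):
    """Minimum DL-to-UL guard period in seconds."""
    return 2 * cell_radius_m / c + delay_spread_s

# Urban macro cell: 300 m radius, 1 microsecond delay spread
t_urban = guard_period(300, 1e-6)    # 3 us
# Rural macro cell: 10 km radius, 5 microsecond delay spread
t_rural = guard_period(10e3, 5e-6)   # ~72 us

# Fraction of an assumed 1 ms TDD switching period lost to the guard
overhead_urban = t_urban / 1e-3      # 0.3 %
overhead_rural = t_rural / 1e-3      # ~7 %
```

With these assumed numbers, the rural cell loses more than twenty times as large a fraction of its resources to the guard period, which is why the urban scenario favors TDD.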
Inter-cell synchronization
We want to avoid interference between uplink and downlink within a cell, and the same applies to inter-cell interference. The base stations in different cells should be fairly time-synchronized so that uplink and downlink take place at the same time; otherwise, a cell-edge user receiving a downlink signal from its own base station may be interfered by the uplink transmission of a neighboring user connected to another base station.
This can also be an issue between telecom operators that use neighboring frequency bands. There are strict regulations on the permitted out-of-band radiation, but the out-of-band interference can still be stronger than the desired in-band signal if the interferer is very close to the receiving in-band user. Hence, it is preferable that telecom operators also synchronize their switching between uplink and downlink.
Summary
Massive MIMO will bring great gains in spectral efficiency in future cellular networks, but we should not forget about the traditional disadvantages of TDD operation: 3 dB loss in SNR at peak power transmission, larger guard periods in larger cells, and time synchronization between neighboring base stations.
Fortunately, the 1600 bits per sample that are effectively produced by 100 16-bit ADCs are much more than what is needed to communicate at practical SINRs. For this reason, there is plenty of research on Massive MIMO base stations equipped with lower-resolution ADCs. The use of 1-bit ADCs has received particular attention. Some good paper references are provided in a previous blog post: Are 1-bit ADCs sufficient? While many early works considered narrowband channels, recent papers (e.g., Quantized massive MU-MIMO-OFDM uplink) have demonstrated that 1-bit ADCs can also be used in practical frequency-selective wideband channels. I’m impressed by the analytical depth of these papers, but I don’t think it is practically meaningful to use 1-bit ADCs.
Do we really need 1-bit ADCs?
I think the answer is no in most situations. The reason is that ADCs with a resolution of around 6 bits strike a much better balance between communication performance and power consumption. The state-of-the-art 6-bit ADCs are already very energy-efficient. For example, the paper “A 5.5mW 6b 5GS/s 4×-Interleaved 3b/cycle SAR ADC in 65nm CMOS” from ISSCC 2015 describes a 6-bit ADC that consumes 5.5 mW and has a huge sampling rate of 5 Gsample/s, which is sufficient even for extreme mmWave applications with 1 GHz of bandwidth. In a base station equipped with 100 of these 6-bit ADCs, less than 1 W is consumed by the ADCs. That will likely be a negligible factor in the total power consumption of any base station, so what is the point in using a lower resolution than that?
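To put numbers on this, one can back out the energy per conversion from the cited ADC and see how little the step down to 1 bit would buy. The Walden-style scaling P = FoM · 2^bits · f_s is an assumption here; real designs deviate from it:

```python
# Sketch: ADC power vs. resolution using the Walden figure of merit,
# P = FoM * 2^bits * f_s. The FoM is backed out from the 6-bit ADC cited in
# the text; the exponential scaling itself is a simplifying assumption.
p_ref_w = 5.5e-3   # 5.5 mW, from the cited ISSCC 2015 paper
bits_ref = 6
fs = 5e9           # 5 Gsample/s
fom = p_ref_w / (2**bits_ref * fs)   # ~17 fJ per conversion-step

def adc_array_power(bits, n_adcs=100):
    """Total power [W] of n_adcs ADCs at the given resolution."""
    return n_adcs * fom * 2**bits * fs

p6 = adc_array_power(6)   # 0.55 W for 100 six-bit ADCs
p1 = adc_array_power(1)   # ~17 mW for 100 one-bit ADCs
```

Even under this optimistic scaling, dropping from 6 bits to 1 bit saves only about half a watt in the whole base station, which supports the argument that the saving is not worth the rate loss.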
The use of 1-bit ADCs comes with a substantial loss in communication rate. In contrast, there is a consensus that Massive MIMO with 3-5 bits per ADC performs very close to the unquantized case (see Paper 1, Paper 2, Paper 3, Paper 4, Paper 5). The same applies to 6-bit ADCs, which provide an additional margin that protects against strong interference. Note that there is nothing magical about 6-bit ADCs; maybe 5-bit or 7-bit ADCs will be even better, but I don’t think it is meaningful to use 1-bit ADCs.
Will 1-bit ADCs ever become useful?
To select a 1-bit ADC, instead of an ADC with higher resolution, the energy consumption of the receiving device must be extremely constrained. I don’t think that will ever be the case in base stations, because the power amplifiers dominate their energy consumption. However, the case might be different for internet-of-things devices that are supposed to run for ten years on the same battery. To make 1-bit ADCs meaningful, we need to greatly simplify all the other hardware components as well. One potential approach is to make a dedicated spatial-temporal waveform design, as described in this paper.
There were also lots of other interesting (non-Massive MIMO) things: UAV connectivity, sparsity, and a great deal of questions and discussion on how machine learning could be leveraged; more about that at a later point in time.
Contemporary multi-antenna base stations for cellular communications are equipped with 2-8 antennas, which are deployed along a horizontal line. One example is the uniform linear array (ULA) illustrated in Figure 1 below, where the antenna spacing is uniform. All antennas in the ULA have the same physical down-tilt with respect to the ground, and a fixed radiation pattern and directivity.
By sending the same signal from all antennas, but with different phase-shifts, we can steer beams in different angular directions and thereby make the directivity of the radiated signal different from the directivity of the individual antennas. Since the antennas are deployed on a one-dimensional horizontal line in this example, the ULA can only steer beams in the two-dimensional (2D) azimuth plane as illustrated in Figure 1. The elevation angle is the same for all beams, which is why this is called 2D beamforming. The beamwidth in the azimuth domain shrinks the more antennas are deployed. If the array is used for multiuser MIMO, then multiple beams with different azimuth angles are created simultaneously, as illustrated by the colored beams in Figure 1.
If we instead rotate the ULA so that the antennas are deployed at different heights above the ground, the array can steer beams at different elevation angles. This is illustrated in Figure 2. Note that this is still a form of 2D beamforming, since every beam has the same directivity with respect to the azimuth plane. Such an antenna array can be used to steer beams towards users at different floors of a building. It is also useful for serving flying objects, such as UAVs, jointly with ground users. The beamwidth in the elevation domain shrinks the more antennas are deployed.
If we instead deploy multiple ULAs on top of each other, it is possible to control both the azimuth and elevation angle of a beam. This is called 3D beamforming and is illustrated in Figure 3 using a planar array with a “massive” number of antennas. This gives the flexibility to not only steer beams towards different buildings but also towards different floors of these buildings, to provide a beamforming gain wherever the user is in the coverage area. It is not necessary to have many antennas to perform 3D beamforming – it is basically enough to have three antennas deployed in a triangle. However, as more antennas are added, the beams become narrower and easier to jointly steer in specific azimuth-elevation directions. This increases the array gain and reduces the interference between beams directed to different users, as illustrated by the colors in Figure 3.
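The 3D beamforming described above can be sketched numerically: the steering vector of a half-wavelength-spaced planar array factors into a Kronecker product of a vertical and a horizontal ULA steering vector. The 8×8 array size and the angles below are illustrative assumptions:

```python
import numpy as np

# Sketch: steering a beam in both azimuth and elevation with a planar array.
def ula_steering(M, spatial_freq):
    """Steering vector of an M-antenna ULA for a given spatial frequency."""
    return np.exp(1j * np.pi * np.arange(M) * spatial_freq)

def planar_steering(M_v, M_h, azimuth, elevation):
    """Unit-norm steering vector of an M_v x M_h planar array (radians)."""
    a_h = ula_steering(M_h, np.cos(elevation) * np.sin(azimuth))
    a_v = ula_steering(M_v, np.sin(elevation))
    return np.kron(a_v, a_h) / np.sqrt(M_v * M_h)

M_v, M_h = 8, 8
# Line-of-sight channel towards a user at azimuth 0.3 rad, elevation -0.1 rad
h = np.sqrt(M_v * M_h) * planar_steering(M_v, M_h, 0.3, -0.1)
w = h / np.linalg.norm(h)        # beamforming weights matched to h
gain = abs(np.vdot(w, h))**2     # = M_v*M_h = 64: the full array gain
# A user in a clearly different direction receives far less from this beam:
h2 = np.sqrt(M_v * M_h) * planar_steering(M_v, M_h, 1.0, 0.4)
leakage = abs(np.vdot(w, h2))**2
```

The gap between `gain` and `leakage` is the interference suppression between beams that the colored lobes in Figure 3 illustrate; it widens as more antennas are added.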
The detailed answer to the question “3D beamforming, is that Massive MIMO?” is as follows. Massive MIMO and 3D beamforming are two different concepts. 3D beamforming can be performed with few antennas, and Massive MIMO can be deployed to only perform 2D beamforming. However, Massive MIMO and 3D beamforming are a great combination in many applications; for example, to spatially multiplex many users in a city with high-rise buildings. One should also bear in mind that, in general, only a fraction of the users are located in line-of-sight, so the formation of angular beams (as shown above) might be of limited importance. The ability to control the array’s radiation pattern in 3D is nonetheless helpful to control the multipath environment such that the many signal components add constructively at the location of the intended receiver.
It looks to me now that two of these speculations were wrong:
Have you found any more? Let me know. The knowledge in the field continues to evolve.
Each antenna type has a predefined radiation pattern, which describes its inherent directivity; that is, how the gain of the emitted signal differs in different angular directions. An ideal isotropic antenna has no directivity, but a practical antenna always has a certain directivity, measured in dBi. For example, a half-wavelength dipole antenna has 2.15 dBi, which means that there is one angular direction in which the emitted signal is 2.15 dB stronger than it would be with a corresponding isotropic antenna. On the other hand, there are other angular directions in which the emitted signal is weaker. This is not a problem as long as there are no receivers in those directions.
In cellular communications, we are used to deploying large vertical antenna panels that cover a 120-degree horizontal sector and have a strong directivity of 15 dBi or more. Such a panel is made up of many small radiating elements, each having a directivity of a few dBi. By feeding them with the same input signal, a higher dBi is achieved for the panel. For example, if the panel consists of 8 patch antenna elements, each having 7 dBi, then you get a 7 + 10·log_{10}(8) ≈ 16 dBi antenna.
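The dBi bookkeeping above can be checked in a couple of lines; the 7 dBi element gain and 8 elements are the numbers from the example in the text:

```python
from math import log10

# Feeding N radiating elements with the same input signal adds 10*log10(N)
# to the element gain (in dBi), as in the panel example from the text.
def panel_gain_dbi(element_dbi, n_elements):
    return element_dbi + 10 * log10(n_elements)

g = panel_gain_dbi(7, 8)   # ~16 dBi, matching the 8-element patch panel
```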
The picture above shows a real LTE site that I found in Nanjing, China, a couple of years ago. Looking at it from above, the site is structured as illustrated to the right. The site consists of three sectors, each containing a base station with four vertical panels. If you looked inside one of the panels, you would (probably) find 8 cross-polarized, vertically stacked radiating elements, as illustrated in Figure 1. There are two RF input signals per panel, one per polarization, so each panel acts as two antennas. This is how LTE with 8TX sectors is deployed: 4 dual-polarized panels per base station.
At the exemplified LTE site, there is a total of 8·8·3 = 192 radiating elements, but only 8·3 = 24 antennas. This disparity can lead to a lot of confusion. The Massive MIMO version of the exemplified LTE site may have the same form factor, but instead of 24 antennas with 16 dBi, you would have 192 antennas with 7 dBi. More precisely, you would connect each of the existing radiating elements to a separate RF input signal to create a larger number of antennas. Therefore, I suggest using the following antenna definition from the book Massive MIMO Networks:
Definition: An antenna consists of one or more radiating elements (e.g., dipoles) which are fed by the same RF signal. An antenna array is composed of multiple antennas with individual RF chains.
Note that, with this definition, an array that uses analog beamforming (e.g., a phased array) only constitutes one antenna. It is usually called an adaptive antenna since the radiation pattern can be changed over time, but it is nevertheless a single antenna. Massive MIMO for sub-6 GHz frequencies is all about adding RF chains (also known as antenna ports), while not necessarily adding more radiating elements than in a contemporary system.
What is the purpose of having more RF chains?
With more RF chains, you have more degrees of freedom to modify the radiation pattern of the transmitted signal based on where the receiver is located. When transmitting a precoded signal to a single user, you adjust the phases of the RF input signals to make them all combine constructively at the intended receiver.
The maximum antenna/array gain is the same when using one 16 dBi antenna and when using 8 antennas with 7 dBi. In the first case, the radiation pattern is usually static, so only a line-of-sight user located in the center of the cell sector will obtain this gain. However, if the antenna is adaptive (i.e., supports analog beamforming), the main lobe of the radiation pattern can also be steered towards line-of-sight users located in other angular directions. This feature might be sufficient for supporting the intended single-user use-cases of mm-wave technology (see Figure 4 in this paper).
In contrast, in the second case, we can adjust the radiation pattern by 8-antenna precoding to deliver the maximum gain to any user in the sector. This feature is particularly important for non-line-of-sight users (e.g., indoor use-cases), for which the signals from the different radiating elements will likely be received with “random” phase shifts and therefore add non-constructively, unless we compensate for the phases by digital precoding.
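A small numerical sketch of this phase compensation; the NLoS channel is drawn at random, and maximum ratio (MR) precoding, w = h*/‖h‖, is one standard way to do the alignment:

```python
import numpy as np

# Sketch: digital precoding aligns the per-antenna phases so that the
# signals add constructively at a non-line-of-sight user.
rng = np.random.default_rng(1)
M = 8
# NLoS channel: i.i.d. complex Gaussian entries with "random" phases
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

w_mr = np.conj(h) / np.linalg.norm(h)   # phase-aligned (MR) precoding
w_fix = np.ones(M) / np.sqrt(M)         # same signal on all antennas, no phase control

gain_mr = np.abs(h @ w_mr)**2    # = ||h||^2: the full array gain
gain_fix = np.abs(h @ w_fix)**2  # typically much smaller: phases add non-coherently
```

By the Cauchy-Schwarz inequality, `gain_mr` is the largest gain any unit-power precoder can deliver over this channel, while the fixed pattern only happens to work when the phases line up by chance.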
Note that most papers on Massive MIMO keep the antenna gain constant when comparing systems with different numbers of antennas. There is nothing wrong with doing that, but one cannot interpret the single-antenna case in such a study as a contemporary system.
Another, perhaps more important, feature of having multiple RF chains is that we can spatially multiplex several users. For this, you need at least as many RF inputs as there are users. Each of them can get the full array gain, and digital precoding can also be used to avoid inter-user interference.
Since the spectral efficiency (bit/s/Hz) and many other performance metrics of interest depend on the SNR, and not on the individual values of the three parameters, it is common practice to normalize one or two of the parameters to unity. This habit makes it easier to interpret performance expressions, to select reasonable SNR ranges, and to avoid mistakes in analytical derivations.
There are, however, situations when the absolute value of the transmitted/received signal power matters, and not the relative value with respect to the noise power, as measured by the SNR. In these situations, it is easy to make mistakes if you use normalized parameters. I see this type of error far too often, both as a reviewer and in published papers. I will give some specific examples below, but I won’t tell you who has made these mistakes, to avoid pointing the finger at anyone specifically.
Wireless energy transfer
Electromagnetic radiation can be used to transfer energy to wireless receivers. In such wireless energy transfer, it is the received signal energy that is harvested by the receiver, not the SNR. Since the noise power is extremely small, the SNR is (at least) a billion times larger than the received signal power. Hence, a normalization error can lead to crazy conclusions, such as being able to transfer energy at a rate of 1 W instead of 1 nW. The former is enough to keep a wireless transceiver on continuously, while the latter requires you to harvest energy for a long time period before you can turn the transceiver on for a brief moment.
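A quick numerical illustration of why the received power and the SNR must not be confused; the transmit power, path loss, and bandwidth are assumed example values:

```python
from math import log10

# Sketch: received power vs. SNR in wireless energy transfer.
p_tx = 1.0             # transmit power [W] (assumed)
path_loss_db = 70      # assumed propagation loss
p_rx = p_tx * 10**(-path_loss_db / 10)   # received power = 100 nW

# Thermal noise over an assumed 10 MHz bandwidth: -174 dBm/Hz + 10*log10(B)
noise_power_w = 10**((-174 + 10 * log10(10e6) - 30) / 10)
snr_db = 10 * log10(p_rx / noise_power_w)
# p_rx is ~100 nW, while the SNR is ~64 dB: the two quantities differ by
# many orders of magnitude, so a normalization error here is catastrophic.
```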
Energy efficiency
The energy efficiency (EE) of a wireless transmission is measured in bit/Joule. The EE is computed as the ratio between the data rate (bit/s) and the power consumption (Watt = Joule/s). While the data rate depends on the SNR, the power consumption does not. The same SNR value can be achieved over a long propagation distance by using a high transmit power or over a short distance by using a low transmit power. The EE will be widely different in these cases. If a “normalized transmit power” is used instead of the actual transmit power when computing the EE, one can get EEs that are one million times smaller than they should be. As a rule of thumb, if you compute things correctly, you will get EE numbers in the range of 10 kbit/Joule to 10 Mbit/Joule.
Noise power depends on the bandwidth
The noise power is proportional to the communication bandwidth. When working with a normalized noise power, it is easy to forget that a given SNR value only applies for one particular value of the bandwidth.
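The bandwidth dependence is captured by the standard thermal-noise formula, N = -174 dBm/Hz + 10·log10(B); the two carrier bandwidths below are assumed examples:

```python
from math import log10

# Sketch: thermal noise power grows with bandwidth, so an SNR quoted
# without the bandwidth is ambiguous.
def noise_power_dbm(bandwidth_hz, noise_figure_db=0):
    """Thermal noise power in dBm: kT = -174 dBm/Hz at room temperature."""
    return -174 + 10 * log10(bandwidth_hz) + noise_figure_db

n_lte = noise_power_dbm(20e6)   # ~-101 dBm for a 20 MHz carrier
n_mmw = noise_power_dbm(1e9)    # ~-84 dBm for a 1 GHz mmWave carrier
# The same received signal power thus gives a ~17 dB lower SNR in the
# wider band.
```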
Some papers normalize the noise variance and channel gain, but then make the SNR equal to the unnormalized transmit power (measured in W). This may greatly overestimate the SNR, but the achievable rates might still be in the reasonable range if you operate the system in an interference-limited regime.
Some papers contain an alternative EE definition where the spectral efficiency (bit/s/Hz) is divided by the power consumption (Joule/s). This leads to the alternative EE unit bit/Joule/Hz. This definition is not formally wrong, but it gives the misleading impression that one can multiply the EE value by any choice of bandwidth to get the desired number of bit/Joule. That is not the case, since the SNR only holds for one particular value of the bandwidth.
Knowing when to normalize
In summary, even if it is convenient to normalize system parameters in wireless communications, you should only do it if you understand when normalization is possible and when it is not. Otherwise, you can make embarrassing mistakes, such as submitting a paper where the results are six orders of magnitude off. Unfortunately, several such papers have been published, and they create a vicious circle by tricking others into making the same mistakes.
Massive MIMO in Sub-6 GHz and mmWave: Physical, Practical, and Use-Case Differences
Christopher Mollén recently defended his doctoral thesis entitled High-End Performance with Low-End Hardware: Analysis of Massive MIMO Base Station Transceivers. In the following video, he explains the basics of how the non-linear distortion from Massive MIMO transceivers is radiated in space.
While there is no theoretical upper limit on how spectrally efficient Massive MIMO can become by adding more antennas, we need to set some reasonable first goals. Currently, many companies are trying to implement analog beamforming in a cost-efficient manner. That will allow for narrow beamforming, but not spatial multiplexing.
By following the methodology in Section 3.3.3 of Fundamentals of Massive MIMO, a simple formula for the downlink spectral efficiency is:

SE = K (1 − K/τ_c) log_2(1 + (c·M·SNR)/(K(SNR + 1)))   (1)

where M is the number of base-station antennas, K is the number of spatially multiplexed users, c ∈ (0,1] is the quality of the channel estimates, and τ_c is the number of channel uses per channel coherence block. For simplicity, I have assumed the same pathloss for all the users. The variable SNR is the nominal signal-to-noise ratio (SNR) of a user, achieved when M = K = 1. Eq. (1) is a rigorous lower bound on the sum capacity, achieved under the assumptions of maximum ratio precoding, i.i.d. Rayleigh fading channels, and equal power allocation. With better processing schemes, one can achieve substantially higher performance.

To get an even simpler formula, let us approximate (1) as

SE ≈ K log_2(1 + c·M/K)   (2)

by assuming a large channel coherence block (τ_c ≫ K) and negligible noise (SNR ≫ 1).
What does the formula tell us?
If we increase M while K is fixed, we will observe a logarithmic improvement in spectral efficiency. This is what analog beamforming can achieve for K = 1 and, hence, I am a bit concerned that the industry will be disappointed with the gains that they will obtain from such beamforming in 5G.
If we instead increase M and K jointly, so that M/K stays constant, then the spectral efficiency will grow linearly with the number of users. Note that the same transmit power is divided between the users, but the power reduction per user is compensated by the increasing array gain, so that the performance per user remains the same.
The largest gains come from spatial multiplexing
To give some quantitative numbers, consider a baseline system with K = 1 user that achieves 2 bit/s/Hz; that is, cM = 3 in (2). If we increase the number of antennas by 16 times, the spectral efficiency becomes log_2(1 + 48) ≈ 5.6 bit/s/Hz. This is the gain from beamforming. If we also increase the number of users to K = 16, we get 16·log_2(1 + 3) = 32 bit/s/Hz. This is the gain from spatial multiplexing. Clearly, the largest gains come from spatial multiplexing, and adding many antennas is necessary to facilitate such multiplexing.
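These numbers can be reproduced from the approximate formula; the values c = 0.3 and M = 10 below are assumptions chosen so that the baseline gives 2 bit/s/Hz (any pair with cM = 3 works):

```python
from math import log2

# Sketch: sum spectral efficiency from the approximation
# SE ~ K * log2(1 + c*M/K), with assumed c and baseline M.
def se_approx(M, K, c):
    return K * log2(1 + c * M / K)

c = 0.3                           # assumed channel-estimate quality
se_base = se_approx(10, 1, c)     # 2.0 bit/s/Hz: the baseline
se_beam = se_approx(160, 1, c)    # ~5.6 bit/s/Hz: 16x antennas, beamforming gain
se_mux = se_approx(160, 16, c)    # 32 bit/s/Hz: 16 users, multiplexing gain
```

Going from the baseline to 16× antennas less than triples the sum rate, while serving 16 users with the same array multiplies it by 16.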
This analysis has implicitly assumed full digital beamforming. An analog or hybrid beamforming approach may achieve most of the array gain for K = 1. However, although hybrid beamforming allows for spatial multiplexing, I believe that the gains will be substantially smaller than with full digital beamforming.
Will the futuristic-sounding holographic beamforming make Massive MIMO obsolete? Not at all, because this is a new implementation architecture, not a new beamforming scheme or spatial multiplexing method. According to the company’s own white paper, the goal is to deliver “a new dynamic beamforming technique using a Software Defined Antenna (SDA) that employs the lowest C-SWaP (Cost, Size, Weight, and Power)“. Simply speaking, it is a way to implement a phased array in a thin, conformable, and affordable way. The PESAs (passive electronically steered antennas) are constructed using high-volume commercial off-the-shelf components. Each PESA has a single RF input and a distribution network that is used to vary the directivity of the beamforming. With a single RF input, only single-user, single-stream beamforming is possible. As explained in Section 1.3 of my recent book, such single-user beamforming can improve the SINR, but the rate only grows logarithmically with the number of antennas. Nevertheless, cost-efficient single-stream beamforming from massive arrays is one of the first issues that the industry tries to solve, in preparation for a full-blown Massive MIMO deployment.
The largest gains from multiple antenna technologies come from spatial multiplexing of many users, using a Massive MIMO topology where the inter-user interference is reduced by making the beams narrower as more users are to be multiplexed. The capacity then grows linearly with the number of users, as also explained in Section 1.3 of my book.
Can holographic beamforming be used to implement Massive MIMO with spatial multiplexing of tens of users? Yes, similar to hybrid beamforming, one could deploy an array of PESAs, where each PESA is used to transmit to one user. Eric J. Black, CTO and founder of Pivotal Commware, refers to this as “sub-aperture based SDMA“. If you want the capability of serving ten users simultaneously, you will need ten PESAs.
If the C-SWaP of holographic beamforming is as low as claimed, the technology might have the key to cost-efficient deployment of Massive MIMO. The thin and conformable form factor also makes me think about the recent concept of Distributed Large Intelligent Surface, where rooms are decorated with small antenna arrays to provide seamless connectivity.
If you search on IEEE Xplore, the origin of the name remains puzzling. The earliest papers are “Massive MIMO: How many antennas do we need?” by Hoydis/ten Brink/Debbah and “Achieving Large Spectral Efficiency with TDD and Not-so-Many Base-Station Antennas” by Huh/Caire/Papadopoulos/Ramprashad, both from 2011. However, these papers refer to Marzetta’s seminal paper, which doesn’t call it “Massive MIMO”.
If you instead read the news reports by ZDNet and Silicon from the 2010 Bell Labs Open Days in Paris, the origin of “Massive MIMO” becomes clearer. Marzetta presented his concept and reportedly said that “We haven’t been able to come up with a catchy name”, but told ZDNet that “massive MIMO” and “large-scale MIMO” were two candidates. To the Massive MIMO blog, Marzetta now explains why he initially abandoned these potential names, in favor of LSAS:
When I explained the concept to the Bell Labs Director of Research, he commented that it didn’t sound at all like MIMO to him. He recommended strongly that I think of a name that didn’t contain the acronym “MIMO”, hence, LSAS. Eventually (after everyone else called it Massive MIMO) I abandoned “LSAS” and started to call it “Massive MIMO”.
In conclusion, the Massive MIMO name came originally from Marzetta, who used it when first describing the concept to the public, but the name was popularized by other researchers.
Interestingly, some radio resource allocation problems that appear to have exponential complexity can be relaxed to a form that is much easier to solve – this is what I call “relax and conquer”. In optimization theory, relaxation means that you widen the set of permissible solutions to the problem, which in this context means that the discrete optimization variables are replaced with continuous optimization variables. In many cases, it is easier to solve optimization problems with variables that take values in continuous sets than problems with a mix of continuous and discrete variables.
A basic example of this principle arises when communicating over a single-user MIMO channel. To maximize the achievable rate, you first need to select how many data streams to spatially multiplex and then determine the precoding and power allocation for these data streams. This appears to be a mixed-integer optimization problem, but Telatar showed in his seminal paper that it can be solved by the water-filling algorithm. More precisely, you relax the problem by assuming that the maximum number of data streams are transmitted, and then you let the solution to a convex optimization problem determine how many of the data streams are assigned non-zero power; this is the optimal number of data streams. Despite the relaxation, the global optimum to the original problem is obtained.
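A minimal water-filling sketch makes the "relax and conquer" idea concrete: transmit over all eigenmodes and let the optimal power allocation switch the weak ones off. The eigenvalues and power budget below are assumed example values:

```python
import numpy as np

# Sketch: water-filling power allocation for a single-user MIMO channel.
# Weak eigenmodes receive zero power, which reveals the optimal number of
# data streams without any combinatorial search.
def waterfilling(gains, total_power):
    """gains: channel eigenvalues over noise; returns per-mode powers."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]   # strongest eigenmode first
    g = gains[order]
    powers = np.zeros_like(g)
    for m in range(len(g), 0, -1):
        # candidate water level if the m strongest modes are active
        mu = (total_power + np.sum(1 / g[:m])) / m
        cand = mu - 1 / g[:m]
        if cand[-1] >= 0:             # all m powers non-negative: optimal
            powers[:m] = cand
            break
    out = np.zeros_like(powers)
    out[order] = powers               # restore the original mode ordering
    return out

p = waterfilling([2.0, 1.0, 0.05], total_power=1.0)
# The weakest eigenmode gets zero power: only two streams are used.
```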
There are other, less-known examples of the “relax and conquer” method. Some years ago, I came across the paper “Jointly optimal downlink beamforming and base station assignment“, which has received much less attention than it deserves. The UE-BS association problem, considered in this paper, is non-trivial since some BSs might have many more UEs in their vicinity than other BSs. Nevertheless, the paper shows that one can solve the problem by first relaxing it so that all BSs transmit to all the UEs. The author formulates a relaxed optimization problem where the beamforming vectors (including power allocation) are selected to satisfy each UE’s SINR constraint, while minimizing the total transmit power. This problem is solved by convex optimization and, importantly, the optimal solution is always such that each UE only receives a non-zero signal power from one of the BSs. Hence, the seemingly difficult combinatorial UE-BS association problem is relaxed to a convex optimization problem, which provides the optimal solution to the original problem!
I have reused this idea in several papers. The first example is “Massive MIMO and Small Cells: Improving Energy Efficiency by Optimal Soft-cell Coordination“, which considers a similar setup but with a maximum transmit power per BS. The consequence of including this practical constraint is that some UEs might be served by multiple BSs at the optimal solution. These BSs send different messages to the UE, which decodes them by successive interference cancellation; thus, the solution is still practically achievable.
One practical weakness of the two aforementioned papers is that they take the small-scale fading realizations into account in the optimization, so the problem must be solved once per coherence interval, which requires extremely high computational power. More recently, in the paper “Joint Power Allocation and User Association Optimization for Massive MIMO Systems“, we applied the same “relax and conquer” method to Massive MIMO, but targeting lower bounds on the downlink ergodic capacity. Since the capacity bounds are valid as long as the channel statistics are fixed (and the same UEs are active), our optimized BS-UE association can be utilized for a relatively long time period. This makes the proposed algorithm practically relevant, in contrast to the prior works that are more of academic interest.
Another example of the “relax and conquer” method is found in the paper “Joint Pilot Design and Uplink Power Allocation in Multi-Cell Massive MIMO Systems”. We consider the assignment of orthogonal pilot sequences to users, which appears to be a combinatorial problem. Instead of assigning a pilot sequence to each UE and then allocating power, we relax the problem by allowing each user to design its own pilot sequence as a linear combination of the original orthogonal sequences. Hence, a pair of UEs might have partially overlapping sequences, instead of either identical or orthogonal sequences (as in the original problem). The relaxed problem even allows for pilot contamination within a cell. The sequences are then optimized to maximize the max-min performance. The resulting problem is non-convex, but the combinatorial structure has been relaxed so that there are only optimization variables from continuous sets. A local optimum to the joint pilot assignment and power control problem is found with polynomial complexity, using standard methods from the optimization literature. The optimization might not lead to a set of orthogonal pilot sequences, but the solution is practically implementable and gives better performance.
]]>In most cases, the receiver only has imperfect CSI and then it is harder to measure the performance. In fact, it took me years to understand this properly. To explain the complications, consider the uplink of a single-cell Massive MIMO system with $K$ single-antenna users and $M$ antennas at the base station. The received $M$-dimensional signal is

$$\mathbf{y} = \sum_{k=1}^{K} \mathbf{h}_k s_k + \mathbf{n},$$
where $s_k$ is the unit-power information signal from user $k$, $\mathbf{h}_k \in \mathbb{C}^{M}$ is the fading channel from this user, and $\mathbf{n} \in \mathbb{C}^{M}$ is unit-power additive Gaussian noise. In general, the base station will only have access to an imperfect estimate $\hat{\mathbf{h}}_k$ of $\mathbf{h}_k$, for $k=1,\ldots,K$.
Suppose the base station uses the estimates $\hat{\mathbf{h}}_1,\ldots,\hat{\mathbf{h}}_K$ to select a receive combining vector $\mathbf{v}_k$ for user $k$. The base station then multiplies it with $\mathbf{y}$ to form a scalar that is supposed to resemble the information signal $s_k$:

$$\mathbf{v}_k^H \mathbf{y} = \underbrace{\mathbf{v}_k^H \mathbf{h}_k s_k}_{\textrm{Desired signal}} + \underbrace{\sum_{i \neq k} \mathbf{v}_k^H \mathbf{h}_i s_i}_{\textrm{Interference}} + \underbrace{\mathbf{v}_k^H \mathbf{n}}_{\textrm{Noise}}.$$
From this expression, a common mistake is to directly say that the SINR is

$$\mathrm{SINR}_k = \frac{|\mathbf{v}_k^H \mathbf{h}_k|^2}{\sum_{i \neq k} |\mathbf{v}_k^H \mathbf{h}_i|^2 + \|\mathbf{v}_k\|^2},$$
which is obtained by computing the power of each of the terms (averaged over the signals and noise), and then to claim that $\mathbb{E}\{\log_2(1+\mathrm{SINR}_k)\}$ is an achievable rate (where the expectation is with respect to the random channels). You can find this type of argument in many papers, without any proof of the information-theoretic achievability of this rate value. Clearly, $\mathrm{SINR}_k$ is an SINR, in the sense that the numerator contains the total signal power and the denominator contains the interference power plus the noise power. However, this doesn’t mean that you can plug $\mathrm{SINR}_k$ into “Shannon’s capacity formula” and get something sensible. This will only yield a correct result when the receiver has perfect CSI.
A basic (but non-conclusive) test of the correctness of a rate expression is to check that the receiver can compute the expression based on its available information (i.e., estimates of random variables and deterministic quantities). Any expression containing the true channels $\mathbf{h}_1,\ldots,\mathbf{h}_K$ fails this basic test, since you need to know the exact channel realizations to compute it, while the receiver only has access to the estimates.
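To make the signal model concrete, here is a minimal numpy sketch (the array size, number of users, and the simple additive estimation-error model are my own assumptions) that generates the received signal, applies maximum-ratio combining based on imperfect estimates, and decomposes the combined scalar into its desired, interference, and noise terms.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 4        # BS antennas, single-antenna users (assumed values)
e = 0.1             # estimation-error variance per antenna (assumed)

# MMSE-style decomposition: h_k = hhat_k + error, with independent parts
Hhat = np.sqrt(1 - e) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
Err = np.sqrt(e) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
H = Hhat + Err      # true channels h_1, ..., h_K as columns

# Received signal y = sum_k h_k s_k + n, with unit-power symbols and noise
s = np.exp(2j * np.pi * rng.random(K))
n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = H @ s + n

# Maximum-ratio combining for user 0, based only on the estimate
v = Hhat[:, 0]
z = np.vdot(v, y)   # scalar that should resemble s_0

desired = np.vdot(v, H[:, 0]) * s[0]
interference = sum(np.vdot(v, H[:, i]) * s[i] for i in range(1, K))
noise = np.vdot(v, n)
```

Note that `desired`, `interference`, and `noise` all depend on the true channels, which is exactly why the receiver cannot compute the erroneous SINR expression from its available information.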
What is the right approach?
Remember that the SINR is not important by itself; we should start from the performance metric of interest and then we might eventually interpret a part of the expression as an effective SINR. In Massive MIMO, we are usually interested in the ergodic capacity. Since the exact capacity is unknown, we look for rigorous lower bounds on the capacity. There are several bounding techniques to choose between, of which I will describe the two most common ones.
The first lower bound on the uplink capacity can be applied when the channels are Gaussian distributed and $\hat{\mathbf{h}}_1,\ldots,\hat{\mathbf{h}}_K$ are the MMSE estimates with the corresponding estimation error covariance matrices $\mathbf{C}_1,\ldots,\mathbf{C}_K$. The ergodic capacity of user $k$ is then lower bounded by

$$R_k = \mathbb{E}\left\{ \log_2\left( 1 + \frac{|\mathbf{v}_k^H \hat{\mathbf{h}}_k|^2}{\sum_{i \neq k} |\mathbf{v}_k^H \hat{\mathbf{h}}_i|^2 + \mathbf{v}_k^H \left( \sum_{i=1}^{K} \mathbf{C}_i \right) \mathbf{v}_k + \|\mathbf{v}_k\|^2} \right) \right\}.$$
Note that this expression can be computed at the receiver using only the available channel estimates (and deterministic quantities). The ratio inside the logarithm can be interpreted as an effective SINR, in the sense that the rate is equivalent to that of a fading channel where the receiver has perfect CSI and an SNR equal to this effective SINR. A key difference from the erroneous SINR expression above is that only the part of the desired signal that is received along the estimated channel $\hat{\mathbf{h}}_k$ appears in the numerator of the SINR, while the remainder of the desired signal appears as the term $\mathbf{v}_k^H \mathbf{C}_k \mathbf{v}_k$ in the denominator. This is the price to pay for having imperfect CSI at the receiver, according to this capacity bound, which has been used by Hoydis et al. and Ngo et al., among others.
The second lower bound on the uplink capacity is

$$R_k = \log_2\left( 1 + \frac{|\mathbb{E}\{\mathbf{v}_k^H \mathbf{h}_k\}|^2}{\sum_{i=1}^{K} \mathbb{E}\{|\mathbf{v}_k^H \mathbf{h}_i|^2\} - |\mathbb{E}\{\mathbf{v}_k^H \mathbf{h}_k\}|^2 + \mathbb{E}\{\|\mathbf{v}_k\|^2\}} \right),$$
which can be applied for any channel fading distribution. This bound provides a value close to the first bound when there is substantial channel hardening in the system, while it will greatly underestimate the capacity when $\mathbf{v}_k^H \mathbf{h}_k$ varies a lot between channel realizations. The reason is that, to obtain this bound, the receiver detects the signal as if it were received over a non-fading channel with the deterministic gain $\mathbb{E}\{\mathbf{v}_k^H \mathbf{h}_k\}$ (which is known in theory and easy to measure in practice). Nevertheless, there are no approximations involved, so the expression is always a valid lower bound.
Since all the terms in this bound are deterministic, the receiver can clearly compute it using its available information. The main merit of this bound is that the expectations in the numerator and denominator can sometimes be computed in closed form; for example, when using maximum-ratio or zero-forcing combining with i.i.d. Rayleigh fading channels, or maximum-ratio combining with correlated Rayleigh fading. Two early works that used this bound are by Marzetta and by Jose et al.
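Both bounds are straightforward to evaluate numerically. Below is a Monte Carlo sketch for maximum-ratio combining over i.i.d. Rayleigh fading (the parameters and the simple additive estimation-error model are my own assumptions); with 64 antennas there is substantial channel hardening, so the second bound should land somewhat below, but close to, the first.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, e, trials = 64, 4, 0.1, 2000   # antennas, users, est.-error variance, MC runs

inst_rates = []                      # per-realization terms for the first bound
g = []                               # accumulators for the second (UatF-style) bound
a2 = np.zeros(K)
vv = 0.0
for _ in range(trials):
    Hhat = np.sqrt(1 - e) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    Err = np.sqrt(e) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    H = Hhat + Err                   # h_k = hhat_k + error, so C_i = e * I
    v = Hhat[:, 0]                   # maximum-ratio combining for user 0

    # First bound: instantaneous effective SINR, then average the log
    sig = np.abs(np.vdot(v, Hhat[:, 0])) ** 2
    interf = sum(np.abs(np.vdot(v, Hhat[:, i])) ** 2 for i in range(1, K))
    nv2 = np.linalg.norm(v) ** 2
    inst_rates.append(np.log2(1 + sig / (interf + K * e * nv2 + nv2)))

    # Second bound: accumulate the expectations appearing in the formula
    g.append(np.vdot(v, H[:, 0]))
    a2 += np.array([np.abs(np.vdot(v, H[:, i])) ** 2 for i in range(K)])
    vv += nv2

bound1 = np.mean(inst_rates)
Eg, Ea2, Evv = np.mean(g), a2 / trials, vv / trials
bound2 = np.log2(1 + np.abs(Eg) ** 2 / (Ea2.sum() - np.abs(Eg) ** 2 + Evv))
```

Both values can be computed from quantities the receiver can actually obtain, in contrast to the erroneous SINR expression discussed earlier.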
The two uplink rate expressions can be proved using capacity bounding techniques that have been floating around in the literature for more than a decade; the main principle for computing capacity bounds for the case when the receiver has imperfect CSI is found in a paper by Medard from 2000. The first concise description of both bounds (including all the necessary conditions for using them) is found in Fundamentals of Massive MIMO. The expressions that are presented above can be found in Section 4 of the new book Massive MIMO Networks. In these two books, you can also find the right ways to compute rigorous lower bounds on the downlink capacity in Massive MIMO.
In conclusion, to avoid mistakes, always start with rigorously computing the performance metric of interest. If you are interested in the ergodic capacity, then you start from one of the canonical capacity bounds in the above-mentioned books and verify that all the required conditions are satisfied. Then you may interpret part of the expression as an SINR.
]]>These arguments all turned out to be wrong. In 2017, Massive MIMO was the main physical-layer technology under standardization for 5G, and it is unlikely that any serious future cellular wireless communication system will lack Massive MIMO as a main technology component.
But Massive MIMO is more than a groundbreaking technology for wireless communications: it is also an elegant and mathematically rigorous approach to teaching wireless communications. In the moderately-large number-of-antennas regime, our closed-form capacity bounds become convenient proxies for the link performance achievable with practical coding and modulation.
These expressions take into account the effects of all significant physical phenomena: small-scale and large-scale fading, intra- and inter-cell interference, channel estimation errors, pilot reuse (also known as pilot contamination), and power control. A comprehensive analytical understanding of these phenomena has simply not been possible before, as the corresponding information theory has been too complicated for any practical use.
The intended audiences of Fundamentals of Massive MIMO are engineers and students. I anticipate that as graduate courses on the topic become commonplace, our extensive problem set (with solutions) available online will serve as a useful resource to instructors. While other books and monographs will likely appear down the road, focusing on trendier and more recent research, Fundamentals of Massive MIMO distills the theory and facts that will prevail for the foreseeable future. This, I hope, will become its most lasting impact.
To read the preface of Fundamentals of Massive MIMO, click here. You can also purchase the book here.
]]>The full title of my webinar is Massive MIMO for 5G below 6 GHz: Achieving Spectral Efficiency, Link Reliability, and Low-Power Operation. I will cover the basics of Massive MIMO and explain how the technology is not only great for enhancing the broadband access, but also for delivering the link reliability and low-power operation required by the internet of things. I have made sure that the overlap with my previous webinar is small.
If you watch the webinar live, you will have the chance to ask questions. Otherwise, you can view the recording of the webinar afterward. All the webinars in the IEEE 5G Webinar Series are available for anyone to view.
As a final note, I wrote a guest blog post at IEEE ComSoc Technology News in late December. It follows up on my previous blog post about GLOBECOM and is called: The Birth of 5G: What to do next?
]]>
There are many foreseen applications that involve a large number of drones in a limited area, such as disaster management, traffic monitoring, crowd management, and crop monitoring. The major communication requirements of most drone networks are: several tens of Mbps throughput for streaming high-resolution video, low latency for command and control, highly reliable connectivity in a three-dimensional coverage area, high-mobility support, and simultaneous support for a large number of drones.
The existing wireless systems are unsuitable for communicating with a large number of drones in long-range, high-throughput, and high-altitude applications, for the following reasons:
For the above-mentioned reasons, instead of borrowing from existing wireless technologies, it would be better to develop a new technology that considers the specific requirements and propagation characteristics of drone networks. As of now, spectrum allocation and standardization efforts for drone communication networks are in the initial stage of development. This is where Massive MIMO can play a key role. The attractive features of Massive MIMO, such as spatial multiplexing and range extension, can be exploited to design flexible and efficient drone communication systems. 5G is based on the concept of network slicing, where the network can be configured differently depending on the use case. Therefore, it is possible to deploy a variation of 5G for drone communications, along with appropriately tilted antenna arrays, to provide connectivity to drones flying at high altitudes.
In our recent papers (1 and 2), we illustrated the use of Massive MIMO for drone communications. From these papers, we make the following observations:
Below are some examples of use cases of Massive MIMO enabled drone communication systems. The technical details of Massive MIMO based system design can be found in this paper. The Massive MIMO design parameters for some of the use cases can be found in this paper.
Drone racing: In recent years, drone racing, also called “the sport of the future”, has become popular around the world. In drone racing, low latency is important for drone control, because even a few tens of milliseconds of delay might crash a drone moving at a speed of 40-50 m/s. Interestingly, in our digital world, analog transmission is used for sending videos from racing drones to the pilots. The reason is that, unlike digital transmission, analog transmission does not incur any processing delay, so the overall latency is only about 15 ms. Currently, the 5.8 GHz band (5650 MHz to 5925 MHz) is used for drone racing. The transmitter and receiver use frequency modulation, which requires 40 MHz of frequency separation to avoid cross-talk between neighboring channels. As a result, the number of simultaneous drones in a contest is limited to eight. The video quality is also poor. By using Massive MIMO, several tens of drones could simultaneously participate in a contest and the pilots could enjoy low-latency, high-quality video transmission.
Sports streaming: Utilizing drones for sports streaming will change the way we view sports events. High-resolution 4K 360-degree videos, taken by multiple drones at different angles, can be broadcast to give viewers an entirely new experience. If there are 20 drones covering a sports event, the required sum throughput will be on the order of 10 Gbps. Massive MIMO in the mm-wave frequency bands can be used to achieve this high throughput. This can become reality, as there are already signs pointing towards the use of drones for covering sports events. For instance, during the 2018 Winter Olympics, drones will be used extensively.
Surveillance/Search and Rescue/Disaster management: During natural disasters, a network of drones can be quickly deployed to enable rescue teams to assess the situation in real time via high-resolution video streaming. Depending on the area to be covered and the desired video quality, the sum throughput requirement will be on the order of Gbps. A Massive MIMO array deployed on a ground vehicle or a large aerial vehicle can be used to serve a swarm of drones.
Aerial survey: A swarm of drones can be used for high-resolution aerial imagery of several kilometers of landscape. There are many uses of aerial survey, including state governance, city planning, 3D cartography, and crop monitoring. Massive MIMO can be an enabler for such high throughput and long-range applications.
Backhaul for flying base stations: During emergency situations and heavy traffic conditions, UAVs could be used as flying base stations to provide wireless connectivity to the cellular users. A Massive MIMO base station can act as a high-capacity backhaul to a large number of flying base stations.
Space exploration: Currently, it takes several hours to receive a photo taken by the Curiosity Mars rover. It is possible to use Massive MIMO to reduce the overall transmission delay. For example, by using a massive antenna array deployed in an orbiter (see the above figure), a swarm of drones and rovers roaming the surface of another planet can send videos and images to Earth. The array can be used to spatially multiplex the uplink transmissions from the drones (and possibly the rovers) to the orbiter. Note that the distance between the Mars surface and the orbiter is about 400 km. If the drones fly at an altitude of a few hundred meters and spread out over a region with a radius of a few hundred kilometers, the angular resolution of the array is sufficient for spatial multiplexing. The array can then transmit the collected images and videos to Earth by exploiting the array gain. This might sound like science fiction, but NASA is already developing a 256-element antenna array for future Mars rovers to enable direct communication with Earth.
]]>I attended GLOBECOM in Singapore earlier this week. Since more and more preprints are posted online before conferences, one of the unique features of conferences is to meet other researchers and attend the invited talks and interactive panel discussions. This year I attended the panel “Massive MIMO – Challenges on the Path to Deployment”, which was organized by Ian Wong (National Instruments). The panelists were Amitava Ghosh (Nokia), Erik G. Larsson (Linköping University), Ali Yazdan (Facebook), Raghu Rao (Xilinx), and Shugong Xu (Shanghai University).
No common definition
The first discussion item was the definition of Massive MIMO. While everyone agreed that the main characteristic is that the number of controllable antenna elements is much larger than the number of spatially multiplexed users, the panelists put forward different additional requirements. The industry prefers to call anything with at least 32 antennas Massive MIMO, irrespective of whether the beamforming is constructed from codebook-based feedback, a grid of beams, or by exploiting uplink pilots and TDD reciprocity. This demonstrates that Massive MIMO is becoming a marketing term, rather than a well-defined technology. In contrast, academic researchers often have more restrictive definitions; Larsson suggested to specifically include the TDD reciprocity approach in the definition, because it is the robust and overhead-efficient way to acquire channel state information (CSI), particularly for non-line-of-sight users; see Myth 3 in our magazine paper. This narrow definition clearly rules out FDD operation, as pointed out by a member of the audience. Personally, I think that any multi-user MIMO implementation that provides performance similar to the TDD-reciprocity-based approach deserves the Massive MIMO branding, but we should not let marketing people use the name for any implementation just because it has many antennas.
Important use cases
The primary use cases for Massive MIMO in sub-6 GHz bands are to improve coverage and spectral efficiency, according to the panel. Great improvements in spectral efficiency have been demonstrated by prototyping, but the panelists agreed that these should be seen as upper bounds. We should not expect to see more than 4x improvements over LTE in the first deployments, according to Ghosh. Larger gains are expected in later releases, but there will continue to be a substantial gap between the average spectral efficiency observed in real cellular networks and the peak spectral efficiency demonstrated by prototypes. Since Massive MIMO achieves its main spectral efficiency gains by spatial multiplexing of users, we might not need a full-blown Massive MIMO implementation today, when there are only one or two simultaneously active users in most cells. However, the networks need to evolve over time as the number of active users per cell grows.
In mmWave bands, the panel agreed that Massive MIMO is mainly for extending coverage. The first large-scale deployments of Massive MIMO will likely aim at delivering fixed wireless broadband access and this must be done in the mmWave bands; there is too little bandwidth in sub-6 GHz bands to deliver data rates that can compete with wired DSL technology.
Initial cost considerations
The deployment cost is a key factor that will limit the first generations of Massive MIMO networks. Despite all the theoretical research demonstrating that each antenna branch can be built using low-resolution hardware when there are many antennas, one should not forget the higher out-of-band radiation that such hardware can lead to. We need to comply with the spectral emission masks – spectrum is incredibly expensive, so a licensee cannot accept interference from adjacent bands. For this reason, several panelists from the industry expressed the view that we need to use similar hardware components in Massive MIMO as in contemporary base stations and, therefore, the hardware cost grows linearly with the number of antennas. On the other hand, Larsson pointed out that the futuristic devices you could see in James Bond movies 10 years ago can now be bought for $100 in any electronics store; hence, when the technology evolves and the economy of scale kicks in, the cost per antenna should not be higher than in a smartphone.
A related debate is the one between analog and digital beamforming. Several panelists said that analog and hybrid approaches will be used to cut cost in the first deployments. To rely on analog technology is somewhat weird in an age when everything is becoming digital, but Yazdan pointed out that it is only a temporary solution. The long-term vision is to do fully digital beamforming, even in mmWave bands.
Another implementation challenge that was discussed is the acquisition of CSI for mobile users. This is often brought up as a showstopper, since hybrid beamforming methods have such difficulties – it is like looking at a running person through binoculars and trying to follow the movement. This is a challenging issue for any radio technology, but if you rely on uplink pilots for CSI acquisition, it will not be harder than in the systems of today. This has also been demonstrated by measurements.
Open problems
The panel was asked to describe the most important open problems in the Massive MIMO area, from a deployment perspective. One obvious issue, which we called the “grand question” in a previous paper, is to provide better support for Massive MIMO in FDD.
The control plane and MAC layer deserve more attention, according to Larsson. Basic functionalities such as ACK/NACK feedback are often ignored by academia, but incredibly important in practice.
The design of “cell-free” densely distributed Massive MIMO systems also deserves further attention. Connecting all existing antennas together to perform joint transmission seems to be the ultimate approach to wireless networks. Although there is no practical implementation yet, Yazdan stressed that deploying such networks might actually be more practical than it seems, given the growing interest in C-RAN technology.
10 years from now
I asked the panel what the status of Massive MIMO will be 10 years from now. Rao predicted that we will have Massive MIMO everywhere, just as all access points support small-scale MIMO today. Yazdan believed that the different radio technologies (e.g., WiFi, LTE, NR) will converge into one interconnected system, which also allows operators to share hardware. Larsson thinks that over the next decade many more people will have understood the fundamental benefits of utilizing TDD and channel reciprocity, which will have a profound impact on regulations and spectrum allocation.
]]>Unfortunately, there was not enough time for me to answer all the questions that I received, so I had to answer many of them afterwards. I have gathered ten questions and my answers below. I can also announce that I will give another Massive MIMO webinar in January 2018 and it will also be followed by a Q/A session.
1. What are the differences between 4G and 5G that will affect how Massive MIMO can be implemented?
The channel estimation must be implemented in the right way (i.e., exploiting uplink pilots and channel reciprocity) to obtain sufficiently accurate channel state information (CSI) for spatial multiplexing of many users; otherwise, the inter-user interference will eliminate most of the gains. Accurate CSI is hard to achieve within the 4G standard, although there are several Massive MIMO field trials for TDD LTE that show promising results. However, if 5G is designed properly, it will support Massive MIMO from scratch, while in 4G it will always be an add-on that must adhere to the existing air interface.
2. How easy is it to deploy MIMO antennas on the current infrastructure?
Generally speaking, we can reuse the current infrastructure when deploying Massive MIMO, which is why operators show much interest in the technology. You upgrade the radio base stations but keep the same backhaul infrastructure and core network. However, since Massive MIMO supports much higher data rates, some of the backhaul connections might also need to be upgraded to deliver these rates.
3. What are the most suitable channel models for Massive MIMO?
I recommend the channel model that was developed in the MAMMOET project. It is a refinement of the COST 2100 model that takes particular phenomena of having large antenna arrays into account. Check out Deliverable D1.2 from that project.
4. For planar arrays, what is the height to width ratio that gives the highest performance?
You typically need more antennas in the horizontal direction (width) than in the vertical direction (height), because the angular variations between users are larger in the horizontal domain. For example, the array might cover a horizontal sector of 120-180 degrees, while the users’ elevation angles might only differ by a few tens of degrees. This is the reason that 8-antenna LTE base stations use linear arrays in the horizontal direction.
There is no universally optimal answer; it depends on the deployment scenario. If you have high-rise buildings, users on different floors can have rather different elevation angles (they can differ by up to 90 degrees), and you can benefit more from having many antennas in the vertical direction. If all users have almost the same elevation angle, it is preferable to have many antennas in the horizontal direction. These things are further discussed in Sections 7.3 and 7.4 in my new book.
5. What are the difficulties we face in deploying Massive MIMO in FDD systems?
The difficulty is to acquire channel state information at the base station for the frequency band used in the downlink, since it is very resource-demanding to send downlink pilots from a large array; particularly, if you want to spatially multiplex many users. This is an important but challenging problem that researchers have been working on since the 1990s. You can read more about it in Myth 3 and the grand question in the paper Massive MIMO: ten myths and one grand question.
6. Do you believe that there is a value in coordinated resource allocation schemes for Massive MIMO?
Yes, but the resource allocation in Massive MIMO is different from conventional systems. Scheduling might not be so important, since you can multiplex many users spatially, but pilot assignment and power allocation are important aspects that must be addressed. I refer to these things as spatial resource allocation. You can read more about this in Sections 7.1 and 7.2 in my new book, but as you can see from those sections, there are many open problems to be solved.
7. What is channel hardening and what implications does it have on the frequency allocation (in OFDMA networks, for example)?
Channel hardening means that the effective channel after beamforming is almost constant so that the communication link behaves as if there is no small-scale fading. A consequence is that all frequency subcarriers provide almost the same channel quality to a user. Regarding channel assignment, since you can multiplex many tens of users spatially in Massive MIMO, you can assign the entire bandwidth (all subcarriers) to every user; there is no need to use OFDMA to allocate orthogonal frequency resources to the users.
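Channel hardening is easy to visualize numerically: for an i.i.d. Rayleigh fading channel, the normalized gain ||h||^2 / M has a standard deviation of 1/sqrt(M), so the fluctuations shrink as the array grows. Here is a small numpy sketch (illustrative, with arbitrary trial counts):

```python
import numpy as np

rng = np.random.default_rng(2)

def gain_std(M, trials=10000):
    """Std. dev. of the normalized channel gain ||h||^2 / M over realizations,
    for h ~ CN(0, I_M) (i.i.d. Rayleigh fading)."""
    H = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    return ((np.abs(H) ** 2).sum(axis=1) / M).std()

std_small, std_large = gain_std(M=1), gain_std(M=100)  # theory: 1/sqrt(M)
```

With a single antenna the gain fluctuates as much as its mean, while with 100 antennas the fluctuations are about ten times smaller, which is the hardening effect described above.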
8. Is it practical to estimate the channel for each subcarrier in an OFDM system?
To limit the pilot overhead, you typically place pilots only on a small subset of the subcarriers. The distance between the pilots in the frequency domain can be selected based on how frequency-selective the channels are; if a user has L strong channel taps, it is sufficient to send pilots on L subcarriers, even if you have many more subcarriers than that. Based on the received pilot signals, one can either estimate the channels on every subcarrier, or estimate the channels on some of them and interpolate to get estimates for the remaining subcarriers.
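The claim about L taps can be checked in a few lines of numpy (a noise-free toy example with assumed values N=64 and L=4): pilots on only L subcarriers determine the L taps through an L-by-L linear system, from which the channel on all N subcarriers follows.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 64, 4                             # subcarriers, strong channel taps (assumed)

taps = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
Hfreq = np.fft.fft(taps, N)              # true channel on all N subcarriers

pilot_idx = np.arange(0, N, N // L)      # pilots on only L of the N subcarriers
H_pilots = Hfreq[pilot_idx]              # noise-free pilot observations

# The pilot observations relate to the L taps through an L x L DFT submatrix,
# so the taps (and hence the whole channel) can be recovered exactly
A = np.exp(-2j * np.pi * np.outer(pilot_idx, np.arange(L)) / N)
taps_est = np.linalg.solve(A, H_pilots)
H_est = np.fft.fft(taps_est, N)

max_err = np.max(np.abs(H_est - Hfreq))  # ~ machine precision
```

With noisy pilots, one would instead use least-squares or MMSE estimation of the taps, but the same L-observations-for-L-unknowns principle applies.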
9. How sensitive are the Massive MIMO spectral efficiency gains to TDD frame synchronization?
If you consider an OFDM system, then timing synchronization mismatches that are smaller than the cyclic prefix can basically be ignored. This is the case in TDD LTE systems and will not change when considering Massive MIMO systems that are implemented using OFDM. However, the synchronization across cells will not be perfect. The implications are investigated in a recent paper.
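The reason the cyclic prefix absorbs small timing mismatches can be shown in a short numpy sketch (idealized, with no channel or noise; with a multipath channel the offset must additionally leave room for the delay spread): an FFT window that starts a few samples early, but still inside the CP, only causes a known per-subcarrier phase rotation.

```python
import numpy as np

rng = np.random.default_rng(4)
N, cp, d = 64, 16, 5    # FFT size, cyclic-prefix length, timing offset in samples

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK subcarriers
x = np.fft.ifft(X)
tx = np.concatenate([x[-cp:], x])       # OFDM symbol with cyclic prefix

# Receiver FFT window starts d samples too early, but still inside the CP,
# so it sees a circular shift of x rather than inter-symbol interference
rx = tx[cp - d : cp - d + N]
Y = np.fft.fft(rx)

# The shift only rotates each subcarrier by a known phase, which a channel
# equalizer would absorb automatically; here we undo it explicitly
k = np.arange(N)
Y_corrected = Y * np.exp(2j * np.pi * k * d / N)
```

The subcarrier magnitudes are untouched by the offset, which is why mismatches smaller than the CP can basically be ignored.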
10. How does the higher computational complexity and delay in Massive MIMO processing affect the system performance?
I used to think that the computational complexity would be a bottleneck, but it turns out that it is not a big deal since all of the operations are standard (i.e., matrix multiplications and matrix inversions). For example, the circuit that was developed at Lund University shows that MIMO detection and precoding for a 20 MHz channel can be implemented very efficiently and only consumes a few mW.
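As a rough sanity check of why the complexity is modest, here is a back-of-the-envelope count (my own accounting, not taken from the Lund design) of the complex multiplications needed to form a zero-forcing combining matrix:

```python
def zf_complex_mults(M, K):
    """Very rough count of complex multiplications to form the zero-forcing
    matrix Hhat (Hhat^H Hhat)^{-1}: Gram matrix + K-by-K inverse + product."""
    gram = M * K * K      # Hhat^H Hhat
    inverse = K ** 3      # order-of-magnitude cost of a K x K inversion
    product = M * K * K   # Hhat times the inverted K x K matrix
    return gram + inverse + product

ops = zf_complex_mults(M=64, K=8)  # a few thousand multiplications
```

Even for 64 antennas and 8 users, this is only a few thousand multiplications per coherence interval and subcarrier group, which modern hardware handles with ease.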
]]>This article is an interview with Prof. Liesbet Van der Perre who was the scientific leader of the project.
In 2012, when you began to draft the project proposal, Massive MIMO was not a popular topic. Why did you initiate the work?
– Theoretically and conceptually it seemed so interesting that it would be a pity not to work on it. The main goal of the MAMMOET project was to make conceptual progress towards a spectrally and energy efficient system and to raise the confidence level by demonstrating a practical hardware implementation. We also wanted to make channel measurements to see if they would confirm what has been seen in theory.
It seems the project partners had a clear vision from the beginning?
– It was actually very easy to write this proposal because everyone was on the same wavelength and knew what we wanted to achieve. We were all eager to start the project and learn from each other. This is quite unique and explains why the project delivered much more than promised. The fact that the team got along very well has also laid the foundation for further research collaborations.
What were the main outcomes of the project?
– We learned a lot about how things change when going from small to large arrays. New channel models are required to capture the new behaviors. We are used to high-precision hardware being needed, but all of a sudden this is not true when drastically increasing the number of antennas. You can then use low-resolution hardware and simple processing, which is very different from conventional MIMO implementations.
Some of the big conceptual differences in massive MIMO turned out to be easier to solve than expected, while some things were more problematic than foreseen. For example, it is difficult to connect all the signals together. You need to do part of the processing in a distributed fashion to avoid this problem. Synchronization also turned out to be a bottleneck. If we had known that from the start, we could have designed the testbed differently, but we thought that the channel estimation and MIMO processing would be the challenging parts.
What was the most rewarding aspect of leading this project?
– The cross-fertilization of people was unique. We brought people with different backgrounds and expertise together in a room to identify the crucial problems in massive MIMO and find new solutions. For example, we realized early that interference will be a main problem and that zero-forcing processing is needed, although matched filtering was popular at the time. By carefully analyzing the zero-forcing complexity, we could show that it was almost negligible compared to other necessary processing, and we later demonstrated zero-forcing in real time on the testbed. This was surprising for many people, who thought that massive MIMO would be impossible to implement since 8×8 MIMO systems are terribly complex, but many things can be simplified in massive MIMO. Looking back, it might seem that the outcomes were obvious, but these are things you don’t know until you have gone through the process.
What are the big challenges that remain?
– An important challenge is how to integrate massive MIMO into a network. We assumed that there are many users and we can all give them the same time-frequency resources, but the channels and traffic are not always suitable for that. How should we decide which users to put together? We used an LTE-like frame structure, but it is important to design a frame structure that is well-suited for massive MIMO and real traffic.
There are many tradeoffs and degrees of freedom when designing massive MIMO systems. Would you use the technology to provide very good cell coverage or to boost small-cell capacity? Instead of delivering fiber to homes, we could use massive MIMO with very many antennas for spatial multiplexing of fixed wireless connections. Alternatively, in a mobile situation, we might not multiplex so many users. Optimizing massive MIMO for different scenarios is something that remains.
We made a lot of progress on the digital processing side in MAMMOET, while on the analog side we mainly came up with the specifications. We also did not work on the antenna design since, theoretically, it does not matter which antennas you use, but in practice it does.
All the deliverables and publications in the MAMMOET project can be accessed online: https://mammoet-project.eu
The deliverables contain a lot of information related to use cases, requirements, channel modeling, signal processing algorithms, algorithmic implementation, and hardware implementation. Some of the results can be found in the research literature, but far from everything.
Note: The author of this article worked in the MAMMOET project, but did not take part in the drafting of the proposal.
]]>I have been thinking that it can go either way – it is in the hands of marketing people. Advanced WiFi routers have been marketed with MIMO functionality for some years, but the impact is limited since most people get their routers as part of their internet subscriptions instead of buying them separately. Hence, the main question is: will handset manufacturers and telecom operators start using the MIMO term when marketing products to end customers?
Maybe we have the answer, because Sprint, an American telecom operator, is currently marketing its 2018 deployment of new LTE technology by talking publicly about “Massive MIMO”. As I wrote back in March, Sprint and Ericsson were to conduct field tests in the second half of 2017. Results from the tests conducted in Seattle, Washington, and Plano, Texas, have now been described in a press release. The tests were carried out at a carrier frequency in the 2.5 GHz band using TDD mode and an Ericsson base station with 64 transmit/receive antennas. It is fair to call this Massive MIMO, although 64 antennas is at the lower end of the range that I would call “massive”.
The press release describes “peak speeds of more than 300 Mbps using a single 20 MHz channel”, which corresponds to a spectral efficiency of 15 bit/s/Hz. That is certainly higher than you can get in legacy LTE networks, but it is less than what some previous field tests have achieved.
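As a quick sanity check, the quoted spectral efficiency follows directly from the press-release numbers:

```python
# Sanity check: peak rate divided by bandwidth gives the spectral efficiency.
peak_rate_bps = 300e6  # 300 Mbit/s reported in the press release
bandwidth_hz = 20e6    # a single 20 MHz LTE channel

spectral_efficiency = peak_rate_bps / bandwidth_hz  # bit/s/Hz
print(spectral_efficiency)  # 15.0
```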
Hence, when the Sprint COO of Technology, Guenther Ottendorfer, describes their Massive MIMO deployment with the words “You ain’t seen nothing yet”, I hope that this means that we will see network deployments with substantially higher spectral efficiencies than 15 bit/s/Hz in the years to come.
Several videos about the field test in Seattle have recently appeared. The first one demonstrates that 100 people can simultaneously download a video, which is not possible in legacy networks. Since the base station has 64 antennas, the 100 users are probably served by a combination of spatial multiplexing and conventional orthogonal time-frequency multiplexing.
The second video provides some more technical details about the setup used in the field test.
]]>Until recently, a more rigorous analysis was unavailable. Some weeks ago, the authors of this paper argued that, all things considered, the use of superimposed pilots does not offer any appreciable gains for practically interesting use cases. The analysis was based on a capacity-bounding approach for finite numbers of antennas and finite channel coherence, but it assumed the most basic form of signal processing for detection and decoding.
There still remains some hope of seeing improvements by implementing more advanced signal processing, like zero-forcing, multicell MMSE decoding, or iterative decoding algorithms, perhaps involving “turbo” information exchange between the demodulator, channel estimator, and detector. It will be interesting to follow future work by these two groups of authors to understand how large improvements (if any) superimposed pilots can eventually give.
There are, at least, two general lessons to learn here. First, that performance predictions based on asymptotics can be misleading in practically relevant cases. (I have discussed this issue before.) The best way to perform analysis is to use rigorous capacity lower bounds or, possibly, in isolated cases of interest, link-level simulations with channel coding (for which, as it turns out, capacity bounds are a very good proxy). Second, more concretely, that while it may be tempting to squeeze multiple superimposed symbols into the same time-frequency-space resource, once all sources of impairments (channel estimation errors, interference) are accurately accounted for, the gains tend to evaporate. (It is for the same reason that NOMA offers no substantial gains in MIMO systems – a topic that I may return to at a later time.)
]]>I sometimes get the question “Isn’t Massive MIMO just MU-MIMO with more antennas?” My answer is no, because the key benefit of Massive MIMO over conventional MU-MIMO is not only about the number of antennas. Marzetta’s Massive MIMO concept is the way to deliver the theoretical gains of MU-MIMO under practical circumstances. To achieve this goal, we need to acquire accurate channel state information, which in general can only be done by exploiting uplink pilots and channel reciprocity in TDD mode. Thanks to the channel hardening and favorable propagation phenomena, one can also simplify the system operation in Massive MIMO.
Six key differences between conventional MU-MIMO and Massive MIMO are provided below.
| Conventional MU-MIMO | Massive MIMO |
Relation between number of BS antennas (M) and users (K) | M ≈ K and both are small (e.g., below 10) | M ≫ K and both can be large (e.g., M=100 and K=20). |
Duplexing mode | Designed to work with both TDD and FDD operation | Designed for TDD operation to exploit channel reciprocity |
Channel acquisition | Mainly based on codebooks with set of predefined angular beams | Based on sending uplink pilots and exploiting channel reciprocity |
Link quality after precoding/combining | Varies over time and frequency, due to frequency-selective and small-scale fading | Almost no variations over time and frequency, thanks to channel hardening |
Resource allocation | The allocation must change rapidly to account for channel quality variations | The allocation can be planned in advance since the channel quality varies slowly |
Cell-edge performance | Only good if the BSs cooperate | Cell-edge SNR increases proportionally to the number of antennas, without causing more inter-cell interference |
Footnote: TDD stands for time-division duplex and FDD stands for frequency-division duplex.
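The channel hardening row of the table can be illustrated with a small simulation (i.i.d. Rayleigh fading and the antenna counts are my illustrative choices): the normalized channel gain ‖h‖²/M fluctuates far less at a Massive MIMO array size than at a conventional MU-MIMO size.

```python
import numpy as np

# Channel hardening sketch: for i.i.d. Rayleigh fading h ~ CN(0, I_M), the
# normalized gain ||h||^2 / M concentrates around 1 as M grows (std ~ 1/sqrt(M)).
rng = np.random.default_rng(0)

def normalized_gain_std(M, trials=20_000):
    h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1) / M
    return np.std(gain)

std_small = normalized_gain_std(M=4)    # conventional MU-MIMO scale
std_large = normalized_gain_std(M=100)  # Massive MIMO scale
print(std_small, std_large)  # the fluctuations shrink roughly as 1/sqrt(M)
```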
]]>There are several lessons to learn here. First, that peer review may be the best system we know, but it isn’t perfect: disturbingly, it is often affected by incompetence and bias. Second, notwithstanding the first, that many paper rejections are probably also grounded in genuine misunderstandings: writing well takes a lot of experience, and a lot of hard, dedicated work. Finally, and perhaps most significantly, that persistence is really an essential component of success.
]]>One answer is that beamforming and precoding are two words for exactly the same thing, namely to use an antenna array to transmit one or multiple spatially directive signals.
Another answer is that beamforming can be divided into two categories: analog and digital beamforming. In the former category, the same signal is fed to each antenna and then analog phase-shifters are used to steer the signal emitted by the array. This is what a phased array would do. In the latter category, different signals are designed for each antenna in the digital domain. This allows for greater flexibility, since one can assign different powers and phases to different antennas and also to different parts of the frequency band (e.g., subcarriers). This makes digital beamforming particularly desirable for spatial multiplexing, where we want to transmit a superposition of signals, each with a separate directivity. It is also beneficial when the bandwidth is wide, because with fixed phases the signal will get a different directivity in different parts of the band. Under this second answer, precoding is equivalent to digital beamforming. Some people only mean analog beamforming when they say beamforming, while others use the terminology for both categories.
A third answer is that beamforming refers to a single-user transmission with one data stream, such that the transmitted signal consists of one main-lobe and some undesired side-lobes. In contrast, precoding refers to the superposition of multiple beams for spatial multiplexing of several data streams.
A fourth answer is that beamforming refers to the formation of a beam in a particular angular direction, while precoding refers to any type of transmission from an antenna array. This definition essentially limits the use of beamforming to line-of-sight (LoS) communications, because when transmitting to a non-line-of-sight (NLoS) user, the transmitted signal might not have a clear angular directivity. The emitted signal is instead matched to the multipath propagation so that the multipath components that reach the user add constructively.
A fifth answer is that precoding consists of two parts: choosing the directivity (beamforming) and choosing the transmit power (power allocation).
I used to use the word beamforming in its widest meaning (i.e., the first answer), as can be seen in my first book on the topic. However, I have since noticed that some people have a more narrow or specific interpretation of the word. Therefore, I nowadays prefer to only talk about precoding. In Massive MIMO, I think that precoding is the right word to use, since what I advocate is a fully digital implementation, where the phases and powers can be jointly designed to achieve high capacity through spatial multiplexing of many users, in both NLoS and LoS scenarios.
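The fifth answer can be made concrete with a small numpy sketch (the antenna count, power level, and the use of maximum ratio transmission are my illustrative choices, not prescriptions): a precoding vector factors into a unit-norm beamforming direction and a transmit power.

```python
import numpy as np

# Precoding = beamforming (direction) + power allocation, illustrated with
# maximum ratio transmission (MRT) toward a random i.i.d. Rayleigh channel h.
rng = np.random.default_rng(1)
M = 64   # number of transmit antennas (illustrative)
p = 0.5  # transmit power in watts (illustrative)

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

direction = h.conj() / np.linalg.norm(h)  # beamforming part (unit norm)
w = np.sqrt(p) * direction                # full precoding vector

# The radiated power equals p, and the effective channel gain is p * ||h||^2.
tx_power = np.linalg.norm(w) ** 2
effective_gain = np.abs(h @ w) ** 2
print(tx_power, effective_gain / np.linalg.norm(h) ** 2)  # both ≈ 0.5
```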
]]>The sub-6 GHz spectrum is particularly useful for providing network coverage, since the pathloss and channel coherence time are relatively favorable at such frequencies (recall that the coherence time is inversely proportional to the carrier frequency). Massive MIMO in sub-6 GHz spectrum can increase the efficiency of highly loaded cells, by upgrading the technology at existing base stations. In contrast, the huge available bandwidths in mmWave bands can be utilized for high-capacity services, but only over short distances due to the severe pathloss and high noise power (which is proportional to the bandwidth). Massive MIMO in mmWave bands can thus be used to improve the link budget.
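The bandwidth-proportional noise power mentioned above is easy to quantify with the standard thermal-noise formula N = kTB (the two bandwidths below are illustrative examples):

```python
import math

# Thermal noise power grows linearly with bandwidth: N = k*T*B. In dBm this is
# the familiar -174 dBm/Hz floor plus 10*log10(B).
k = 1.380649e-23  # Boltzmann constant [J/K]
T = 290.0         # standard noise temperature [K]

def noise_dbm(bandwidth_hz):
    return 10 * math.log10(k * T * bandwidth_hz / 1e-3)

sub6 = noise_dbm(20e6)   # e.g., a 20 MHz sub-6 GHz channel
mmwave = noise_dbm(2e9)  # e.g., a 2 GHz mmWave channel
print(sub6, mmwave)      # the 100x wider channel pays 20 dB more noise power
```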
Six key differences between sub-6 GHz and mmWave operation are provided below:
| Sub-6 GHz | mmWave |
Deployment scenario | Macro cells with support for high user mobility | Small cells with low user mobility |
Number of simultaneous users per cell | Up to tens of users, due to the large coverage area | One or a few users, due to the small coverage area |
Main benefit from having many antennas | Spatial multiplexing of tens of users, since the array gain and ability to separate users spatially lead to great spectral efficiency | Beamforming to a single user, which greatly improves the link budget and thereby extends coverage |
Channel characteristics | Rich multipath propagation | Only a few propagation paths |
Spectral efficiency and bandwidth | High spectral efficiency due to the spatial multiplexing, but small bandwidth | Low spectral efficiency due to few users, large pathloss, and large noise power, but large bandwidth |
Transceiver hardware | Fully digital transceiver implementations are feasible and have been prototyped | Hybrid analog-digital transceiver implementations are needed, at least in the first products |
Since Massive MIMO was initially proposed by Tom Marzetta for sub-6 GHz applications, I personally recommend using the “Massive MIMO” name only for that use case. One can instead say “mmWave Massive MIMO” or just “mmWave” when referring to multi-antenna technologies for mmWave bands.
]]>One option is to let the signal power become M times larger than in a single-antenna reference scenario, where M is the number of antennas. The increase in SNR will then lead to higher data rates for the users. The gain can be anything from log2(M) bit/s/Hz to almost negligible, depending on how interference-limited the system is. Another option is to utilize the array gain to reduce the transmit power, to maintain the same SNR as in the reference scenario. The corresponding power saving can be very helpful to improve the energy efficiency of the system.
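A small sketch of the first option, under the idealized assumption that the full array gain translates into an M-fold SNR increase (the antenna count and reference SNR are illustrative):

```python
import numpy as np

# Idealized sketch: an M-antenna array gain raises the received SNR by a
# factor M, and the rate gain approaches log2(M) bit/s/Hz at high SNR.
M = 64
snr = 10.0  # single-antenna reference SNR in linear scale (i.e., 10 dB)

rate_ref = np.log2(1 + snr)        # reference rate [bit/s/Hz]
rate_array = np.log2(1 + M * snr)  # rate with the full array gain
gain = rate_array - rate_ref

print(gain, np.log2(M))  # the gain approaches log2(64) = 6 bit/s/Hz
```

In an interference-limited system the interference scales along with the signal, which is why the actual gain can be almost negligible.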
In the uplink, with single-antenna user terminals, we can choose between these options. However, in the downlink, we might not have a choice. There are strict regulations on the permitted level of out-of-band radiation in practical systems. Since Massive MIMO uses downlink precoding, the transmitted signals from the base station have a stronger directivity than in the single-antenna reference scenario. The signal components that leak into the bands adjacent to the intended frequency band will then also be more directive.
For example, consider a line-of-sight scenario where the precoding creates an angular beam towards the intended user (as illustrated in the figure below). The out-of-band radiation will then get a similar angular directivity and lead to larger interference to systems operating in adjacent bands, if their receivers are close to the user (as the victim in the figure below). To counteract this effect, our only choice might be to reduce the downlink transmit power to keep the worst-case out-of-band radiation constant.
Another alternative is that the regulations are made more flexible with respect to precoded transmissions. The probability that a receiver in an adjacent band is hit by an interfering out-of-band beam, such that the interference becomes M times larger than in the reference scenario, reduces with an increasing number of antennas since the beams become narrower. Hence, if beamformed out-of-band interference can be allowed when it occurs with sufficiently low probability, the array gain in Massive MIMO can still be utilized to increase the SNRs. A third option would then be to partially reduce the transmit power to also allow for relaxed linearity requirements on the hardware.
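The narrowing-beam argument can be illustrated with a simple Monte Carlo sketch for a half-wavelength uniform linear array with MRT (the victim direction, threshold, and array sizes are my illustrative choices):

```python
import numpy as np

# How often does a randomly pointed MRT beam deliver a strong "hit" toward a
# fixed victim direction? The main lobe narrows as 1/M, so the hit probability
# shrinks as the number of antennas M grows.
rng = np.random.default_rng(2)

def hit_probability(M, trials=20_000):
    n = np.arange(M)
    victim = np.exp(1j * np.pi * n * 0.3)      # fixed victim direction, u = 0.3
    users = rng.uniform(-1, 1, trials)         # random served-user directions u
    A = np.exp(1j * np.pi * np.outer(n, users))  # half-wavelength ULA responses
    gains = np.abs(victim.conj() @ A) ** 2 / M   # MRT gain toward the victim
    return np.mean(gains > M / 2)              # "strong beamformed hit" events

p8, p64 = hit_probability(8), hit_probability(64)
print(p8, p64)  # the hit probability drops roughly as 1/M
```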
These considerations are nicely discussed in an overview article that appeared on arXiv earlier this year. There are also two papers that analyze the impact of out-of-band radiation in Massive MIMO: Paper 1 and Paper 2.
]]>Many researchers have analyzed pilot contamination over the six years that have passed since Marzetta uncovered its importance in Massive MIMO systems. We now have a quite good understanding of how to mitigate pilot contamination. There is a plethora of different approaches, many of which have complementary benefits. If pilot contamination is not mitigated, it will both reduce the array gain and create coherent interference. Some approaches mitigate the pilot interference in the channel estimation phase, while others combat the coherent interference caused by pilot contamination. In this post, I will try to categorize the approaches and point to some key references.
Interference-rejecting precoding and combining
Pilot contamination makes the estimate of a desired channel correlated with the channels from pilot-sharing users in other cells. When these channel estimates are used for receive combining or transmit precoding, coherent interference typically arises. This is particularly the case if maximum ratio processing is used, because it ignores the interference. If multi-cell MMSE processing is used instead, the coherent interference is rejected in the spatial domain. In particular, recent work by Björnson et al. (see also this related paper) has shown that there is no asymptotic rate limit when using this approach, if there is just a tiny amount of spatial correlation in the channels.
Data-aided channel estimation
Another approach is to “decontaminate” the channel estimates from pilot contamination, by using the pilot sequence and the uplink data for joint channel estimation. This has the potential of both improving the estimation quality (leading to a stronger desired signal) and reducing the coherent interference. Ideally, if the data were known, data-aided channel estimation would increase the effective length of the pilot sequences to the length of the uplink transmission block. Since the data is unknown to the receiver, semi-blind estimation techniques are needed to obtain the channel estimates. Ngo et al. and Müller et al. did early work on pilot decontamination for Massive MIMO. Recent work has proved that one can fully decontaminate the estimates as the length of the uplink block grows large, but it remains to find the most efficient semi-blind decontamination approach for practical block lengths.
Pilot assignment and dimensioning
The choice of which users share a pilot sequence makes a large difference, since users with large pathloss differences and different spatial channel correlation cause less contamination to each other. Recall that higher estimation quality both increases the gain of the desired signal and reduces the coherent interference. Increasing the number of orthogonal pilot sequences is a straightforward way to decrease the contamination, since each pilot can then be assigned to fewer users in the network. The price to pay is a larger pilot overhead, but it seems that a reuse factor of 3 or 4 is often suitable from a sum rate perspective in cellular networks. Joint spatial division and multiplexing (JSDM) provides a basic methodology to take spatial correlation into account in the pilot reuse patterns.
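The overhead trade-off can be quantified with simple bookkeeping (the user count and coherence-block length below are illustrative):

```python
# Pilot overhead sketch: with K users per cell and pilot reuse factor f, each
# coherence block of tau_c symbols spends tau_p = f*K symbols on pilots.
K = 10       # users per cell (illustrative)
tau_c = 200  # symbols per coherence block (illustrative)

for f in (1, 3, 4):
    tau_p = f * K
    print(f"reuse {f}: {tau_p} pilot symbols per block, {tau_p / tau_c:.0%} overhead")

overhead_reuse3 = 3 * K / tau_c  # 15% of the block spent on pilots
```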
Alternatively, pilot sequences can be superimposed on the data sequences, which gives as many orthogonal pilot sequences as the length of the uplink block and thereby reduces the pilot contamination. This approach also removes the pilot overhead, but it comes at the cost of causing interference between pilot and data transmissions. It is therefore important to assign the right fraction of power to pilots and data. A hybrid pilot solution, where some users have superimposed pilots and some have conventional pilots, may bring the best of both worlds.
If two cells use the same subset of pilots, the exact pilot-user assignment can make a large difference. Cell-center users are generally less sensitive to pilot contamination than cell-edge users, but finding the best assignment is a hard combinatorial problem. There are heuristic algorithms that can be used, as well as an optimization framework for evaluating such algorithms.
Multi-cell cooperation
A combination of network MIMO and macro diversity can be utilized to turn the coherent interference into desired signals. This approach is called pilot contamination precoding by Ashikhmin et al. and can be applied in both uplink and downlink. In the uplink, the base stations receive different linear combinations of the user signals. After maximum ratio combining, the coefficients in these linear combinations approach deterministic numbers as the number of antennas grows large. The numbers are only non-zero for the pilot-sharing users. Since the macro diversity naturally creates different linear combinations, the base stations can jointly solve a linear system of equations to obtain the transmitted signals. In the downlink, all signals are sent from all base stations and are precoded in such a way that the coherent interference sent from different base stations cancels out. While this is a beautiful approach for mitigating the coherent interference, it relies heavily on channel hardening, favorable propagation, and i.i.d. Rayleigh fading. It remains to be shown whether the approach can provide performance gains under more practical conditions.
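The uplink version of this idea can be sketched in a few lines of numpy, in the idealized noise-free asymptotic regime (the network size and coefficient values are illustrative):

```python
import numpy as np

# Sketch: after maximum ratio combining and M -> infinity, base station l
# observes y_l = sum_j B[l, j] * s_j, where B[l, j] are known large-scale
# fading coefficients and s_j are the pilot-sharing users' symbols. Macro
# diversity makes the L combinations different, so the network can solve the
# linear system jointly and recover all symbols.
rng = np.random.default_rng(3)
L = 4  # cells, one pilot-sharing user per cell (illustrative)

B = rng.uniform(0.1, 1.0, size=(L, L))  # cross-cell large-scale coefficients
np.fill_diagonal(B, 2.0)                # own-cell links are strongest
s = rng.choice([-1.0, 1.0], size=L)     # transmitted symbols (BPSK)

y = B @ s                      # asymptotic post-combining observations
s_hat = np.linalg.solve(B, y)  # joint decoding across base stations
print(np.allclose(s_hat, s))   # True: coherent interference becomes signal
```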
]]>Have you reflected over what the purpose of asymptotic analysis is? The goal is not that we should design and deploy wireless networks with a nearly infinite number of antennas. Firstly, it is physically impossible to do that in a finite-sized world, irrespective of whether you let the array aperture grow or pack the antennas more densely. Secondly, the conventional channel models break down, since you will eventually receive more power than you transmitted. Thirdly, the technology will be neither cost nor energy efficient, since the cost/energy grows linearly with the number of antennas M, while the delivered system performance either approaches a finite limit or grows logarithmically with M.
It is important not to overemphasize the implications of asymptotic results. Consider the popular power-scaling law which says that one can use the array gain of Massive MIMO to reduce the transmit power as 1/√M and still approach a non-zero asymptotic rate limit. This type of scaling law has been derived for many different scenarios in different papers. The practical implication is that you can reduce the transmit power as you add more antennas, but the asymptotic scaling law does not prescribe how much you should reduce the power when going from, say, 40 to 400 antennas. It all depends on which rates you want to deliver to your users.
The figure below shows the transmit power in a scenario where we start with 1 W for a single-antenna transmitter and then follow the asymptotic power-scaling law as the number of antennas increases. With M = 100 antennas, the transmit power per antenna is just 1 mW, which is unnecessarily low given the fact that the circuits in the corresponding transceiver chain will consume much more power. By using a higher transmit power than 1 mW per antenna, we can deliver higher rates to the users, while barely affecting the total power of the base station.
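To see the numbers behind this argument, here is a sketch assuming one common version of the scaling law in which the total transmit power decays as 1/√M (the baseline and antenna counts are illustrative):

```python
import numpy as np

# Power-scaling sketch: total transmit power decays as 1/sqrt(M) from a
# 1 W single-antenna baseline; the per-antenna power then falls as 1/M^(3/2).
baseline_w = 1.0
for M in (1, 10, 100, 1000):
    total_w = baseline_w / np.sqrt(M)  # total radiated power [W]
    per_antenna_w = total_w / M        # power per antenna [W]
    print(f"M={M:4d}: total {total_w:.4f} W, per antenna {per_antenna_w * 1e3:.3f} mW")

# At M = 100 the per-antenna power is already down to 1 mW, far below what
# the corresponding transceiver circuits consume.
per_antenna_100 = baseline_w / np.sqrt(100) / 100
```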
Similarly, there is a hardware-scaling law which says that one can let the error vector magnitude (EVM) grow with the number of antennas and still approach a non-zero asymptotic rate limit. The practical implication is that Massive MIMO systems can use simpler hardware components (that cause more distortion) than conventional systems, since there is a lower sensitivity to distortion. This is the foundation on which the recent works on low-bit ADC resolutions build (see this paper and references therein).
Even the importance of the coherent interference, caused by pilot contamination, is easily overemphasized if one only considers the asymptotic behavior. For example, the finite rate limit that appears when communicating over i.i.d. Rayleigh fading channels with maximum ratio or zero-forcing processing is only approached in practice if one has around one million antennas.
In my opinion, the purpose of asymptotic analysis is not to understand the asymptotic behaviors themselves, but what the asymptotics can tell us about the performance at practical numbers of antennas. Here are some usages that I think are particularly sound:
Some form of Massive MIMO will appear in 5G, but to get a well-designed system we need to focus more on demonstrating and optimizing the performance in practical scenarios (e.g., the key 5G use cases) and less on pure asymptotic analysis.
]]>With i.i.d. Rayleigh fading, the channel gain ‖h‖² has an Erlang distribution (a scaled chi-squared distribution) and the channel direction h/‖h‖ is uniformly distributed over the unit sphere in ℂ^M. The channel gain and the channel direction are also independent random variables, which is why this is a spatially uncorrelated channel model.
One of the key benefits of i.i.d. Rayleigh fading is that one can compute closed-form rate expressions, at least when using maximum ratio or zero-forcing processing; see Fundamentals of Massive MIMO for details. These expressions have an intuitive interpretation, but should be treated with care because practical channels are not spatially uncorrelated. Firstly, due to the propagation environment, the channel vector is more probable to point in some directions than in others. Secondly, the antennas have spatially dependent antenna patterns. Both factors contribute to the fact that spatial channel correlation always appears in practice.
One of the basic properties of spatial channel correlation is that the base station array receives different average signal power from different spatial directions. This is illustrated in Figure 1 below for a uniform linear array with 100 antennas, where the angle of arrival is measured from the boresight of the array.
As seen from Figure 1, with i.i.d. Rayleigh fading the average received power is equally large from all directions, while with spatially correlated fading it varies depending on in which direction the base station applies its receive beamforming. Note that this is a numerical example that was generated by letting the signal come from four scattering clusters located in different angular directions. Channel measurements from Lund University (see Figure 4 in this paper) show how the spatial correlation behaves in practical scenarios.
Correlated Rayleigh fading is a tractable way to model a spatially correlated channel vector: h ∼ CN(0, R), where the covariance matrix R is also called the spatial correlation matrix. It is only when R is a scaled identity matrix that we have spatially uncorrelated fading. The eigenvalue distribution of R determines how strongly spatially correlated the channel is. If all eigenvalues are identical, then R is a scaled identity matrix and there is no spatial correlation. If a few strong eigenvalues contain most of the power, then there is very strong spatial correlation and the channel vector is very likely to be (approximately) spanned by the corresponding eigenvectors. This is illustrated in Figure 2 below, for the same scenario as in the previous figure. In the considered correlated fading case, there are 20 eigenvalues that are larger than in the i.i.d. fading case. These eigenvalues contain 94% of the power, while the next 20 eigenvalues contain 5% and the smallest 60 eigenvalues only contain 1%. Hence, most of the power is concentrated to a subspace of dimension 20. The fraction of strong eigenvalues is related to the fraction of the angular interval from which strong signals are received. This relation can be made explicit in special cases.
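The eigenvalue concentration can be illustrated with the simple exponential correlation model (this model and its parameter are my illustrative choices, not the clustered-scattering model behind the figures):

```python
import numpy as np

# Spatial correlation sketch: exponential correlation model R[i, j] = r**|i-j|
# for a 100-antenna array. Strong correlation concentrates the power of R in
# a few large eigenvalues; for i.i.d. fading (R = I) all eigenvalues are equal.
M = 100
r = 0.95  # antenna-to-antenna correlation coefficient (illustrative)

R = np.array([[r ** abs(i - j) for j in range(M)] for i in range(M)])
eig = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, largest first

top20 = eig[:20].sum() / eig.sum()
print(f"power in the 20 strongest eigenvalues: {top20:.0%}")
# With R = I the corresponding fraction would be exactly 20%.
```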
One example of spatially correlated fading is when the correlation matrix has equal diagonal elements and non-zero off-diagonal elements, which describe the correlation between the channel coefficients of different antennas. This is a reasonable model when deploying a compact base station array in a tower. Another example is a diagonal correlation matrix with different diagonal elements. This is a reasonable model when deploying distributed antennas, as in the case of cell-free Massive MIMO.
Finally, a more general channel model is correlated Rician fading: h ∼ CN(h̄, R), where the mean value h̄ represents the deterministic line-of-sight channel and the covariance matrix R determines the properties of the fading. The correlation matrix R can still be used to determine the spatial correlation of the received signal power. However, from a system performance perspective, the ratio between the power of the line-of-sight path and the scattered paths can have a large impact on the performance as well. A nearly deterministic channel with a large κ-factor provides more reliable communication, in particular since under correlated fading it is only the large eigenvalues of R that contribute to the channel hardening (which otherwise provides reliability in Massive MIMO).
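A minimal sketch of how a correlated Rician channel realization can be generated (the array size, κ-factor, and LoS steering are my illustrative choices; uncorrelated scattering is assumed purely for simplicity):

```python
import numpy as np

# Correlated Rician fading sketch: h = h_bar + R^(1/2) x with x ~ CN(0, I).
# The kappa-factor (LoS power over scattered power) is set via the norms of
# h_bar and R.
rng = np.random.default_rng(4)
M = 32
kappa = 5.0  # Rician factor (illustrative)

h_bar = np.sqrt(kappa / (kappa + 1)) * np.exp(1j * np.pi * 0.4 * np.arange(M))
R = np.eye(M) / (kappa + 1)   # scattered-part covariance (uncorrelated here)
R_sqrt = np.linalg.cholesky(R)

x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h = h_bar + R_sqrt @ x        # one channel realization

# Per-antenna LoS power over scattered power equals kappa:
print(round(np.abs(h_bar[0]) ** 2 / R[0, 0], 6))  # 5.0
```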
]]>Looking back, I am always wondering where the term “Massive MIMO” actually comes from. When we wrote our paper, the terms “large-scale antenna systems (LSAS)” or simply “large-scale MIMO” were commonly used to refer to base stations with very large antenna arrays, and I do not recall what made us choose our title.
The Google Trends Chart for “Massive MIMO” above clearly shows that interest in this topic started roughly at the time Tom Marzetta’s seminal paper was published, although the term itself does not appear in it at all. If anyone has an idea or reference where the term “Massive MIMO” was first used, please feel free to write this in the comment field.
In case you have not read our paper, let me first explain the key question it tries to answer. Marzetta showed in his paper that the simplest forms of linear receive combining and transmit precoding, namely maximum ratio combining (MRC) and maximum ratio transmission (MRT), respectively, achieve an asymptotic spectral efficiency (when the number of antennas M goes to infinity) that is only limited by coherent interference caused by user equipments (UEs) using the same pilot sequences for channel training (see the previous blog post on pilot contamination). All non-coherent interference such as noise, channel gain uncertainty due to estimation errors, and interference magically vanishes thanks to the strong law of large numbers and favorable propagation. Intrigued by this beautiful result, we wanted to know what happens for a large but finite number of antennas M. Clearly, MRC/MRT are not optimal in this regime, and we wanted to quantify how much can be gained by using more advanced combining/precoding schemes. In other words, our goal was to figure out how many antennas could be “saved” by computing a matrix inverse, which is the key ingredient of the more sophisticated schemes, such as MMSE combining or regularized zero-forcing (RZF) precoding. Moreover, we wanted to compute how much of the asymptotic spectral efficiency can be achieved at a finite M. Please read our paper if you are interested in our findings.
What is interesting to notice is that we (and many other researchers) had always taken the following facts about Massive MIMO for granted and repeated them in numerous papers without further questioning:
We have recently uploaded a new paper on arXiv which proves that all of these “facts” are incorrect and essentially artifacts of using simplistic channel models and suboptimal precoding/combining schemes. What I find particularly amusing is that we have come to this result by carefully analyzing the asymptotic performance of the multicell MMSE receive combiner that I mentioned but rejected in the 2011 Allerton paper. To understand the difference between the widely used single-cell MMSE (S-MMSE) combining and the (not widely used) multicell MMSE (M-MMSE) combining, let us look at their respective definitions for a base station located in cell j:
where L and K denote the number of cells and UEs per cell, Ĥ_l is the estimated channel matrix from the UEs in cell l, and R_li and C_li are the covariance matrices of the channel and the channel estimation errors of UE i in cell l, respectively. While M-MMSE combining uses estimates of the channels from all UEs in all cells, the simpler S-MMSE combining uses only channel estimates from the UEs in the own cell. Importantly, we show that Massive MIMO with M-MMSE combining has unlimited capacity, while Massive MIMO with S-MMSE combining has not! This behavior is shown in the following figure:
In the light of this new result, I wish that we would not have made the following remark in our 2011 Allerton paper:
“Note that a BS could theoretically estimate all channel matrices (…) to further improve the performance. Nevertheless, high path loss to neighboring cells is likely to render these channel estimates unreliable and the potential performance gains are expected to be marginal.”
We could not have been more wrong about it!
In summary, although we did not understand the importance of M-MMSE combining in 2011, I believe that we were asking the right questions. In particular, the consideration of individual channel covariance matrices for each UE has been an important step for the analysis of Massive MIMO systems. A key lesson that I have learned from this story for my own research is that one should always question fundamental assumptions and wisdom.
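For readers who want to experiment, here is a schematic numpy sketch of the structural difference between the two combiners. It follows the description above but omits the estimation-error covariance terms for brevity, and all dimensions are illustrative; it is not the paper's exact expressions.

```python
import numpy as np

# Schematic sketch: M-MMSE builds its regularized inverse from the estimated
# channels of ALL L cells, while S-MMSE uses only the own-cell estimates and
# lumps everything else into the noise term.
rng = np.random.default_rng(5)
M, K, L = 32, 4, 3  # antennas, UEs per cell, cells (illustrative)
sigma2 = 0.1        # noise power (illustrative)

H_hat = [(rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
         for _ in range(L)]
j = 0  # index of the own cell

# M-MMSE: sum over the estimated channels from all cells
A_m = sum(H @ H.conj().T for H in H_hat) + sigma2 * np.eye(M)
V_multicell = np.linalg.solve(A_m, H_hat[j])

# S-MMSE: only the own-cell estimates enter the inverse
A_s = H_hat[j] @ H_hat[j].conj().T + sigma2 * np.eye(M)
V_singlecell = np.linalg.solve(A_s, H_hat[j])

print(V_multicell.shape, V_singlecell.shape)  # (32, 4) (32, 4)
```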
]]>I also touched on the (for sub-6 GHz bands somewhat controversial) topic of hybrid beamforming, and whether it would reduce the required amount of hardware.
A question from the audience was whether the use of antennas with larger physical aperture (i.e., intrinsic directivity) would change the conclusions. The answer is no: the use of directional antennas is more or less equivalent to sectorization. The problem is that to exploit the intrinsic gain, the antennas must a priori point “in the right direction”. Hence, in the array, only a subset of the antennas will be useful when serving a particular terminal. This impacts both the channel gain (reduced effective aperture) and orthogonality (see, e.g, Figure 7.5 in this book).
There was also a stimulating panel discussion afterwards. One question discussed in the panel concerned the necessity, or desirability, of using multiple terminal antennas at mmWave. Looking only at the link budget, base station antennas could be traded against terminal antennas – however, that argument neglects the inevitably lost orthogonality, and furthermore it is not obvious how beam-finding/tracking algorithms will perform (millisecond coherence time at pedestrian speeds!). Also, obviously, the comparison I presented is extremely simplistic – to begin with, the line-of-sight scenario is extremely favorable for mmWaves (blocking problems), but also, I entirely neglected polarization losses. Any attempts to compensate for these problems are likely to require multiple terminal antennas.
Other topics touched on in the panel concerned the viability of Massive MIMO implementations. Perhaps the most important comment in this context was made by Ian Wong of National Instruments: “In the past year, we’ve actually shown that [massive MIMO] works in reality … To me, the biggest development is that the skeptics are being quiet.” (Read more about that here.)
]]>The first step towards reproducibility is to describe the simulation procedure in such detail that another researcher can repeat the simulation, but a major effort is typically needed to reimplement everything. The second step is to make the simulation code publicly available, so that any scientist can review it and easily reproduce the results. While the first step is mandatory for publishing a scientific study, there is a movement towards open science that would also make the second step common practice.
I understand that some researchers are skeptical towards sharing their simulation code, for fear of losing their competitive advantage over other research groups. My personal principle is to not share any code until the research study is finished and the results have been accepted for publication in a full-length journal. After that, I think that society benefits the most if other researchers can focus on improving my and others’ research, instead of spending excessive amounts of time on reimplementing known algorithms. I also believe that the primary competitive advantage in research is the know-how and technical insights, while the simulation code is of secondary importance.
On my GitHub page, I have published Matlab code packages that reproduce the simulation results in one book, one book chapter, and more than 15 peer-reviewed articles. Most of these publications are related to MIMO or Massive MIMO. I see many benefits from doing this:
1) It increases the credibility of my research group’s work;
2) I write better code when I know that other people will read it;
3) Other researchers can dedicate their time to developing new and improved algorithms and comparing them with my baseline implementations;
4) Young scientists may learn how to implement a basic simulation environment by reading the code.
I hope that other Massive MIMO researchers will also make their simulation code publicly available. Maybe you have already done that? In that case, please feel free to write a comment to this post with a link to your code.
]]>The author argues that, under these circumstances, the MNOs have little to gain from investing in 5G technology. Most customers are not asking for any of the envisaged 5G services and will not be inclined to pay extra for them. Webb even compares the situation with the prisoner’s dilemma: the MNOs would benefit the most from not investing in 5G, but they will anyway make investments to avoid a situation where customers switch to a competitor that has invested in 5G. The picture that Webb paints of 5G is rather pessimistic compared to a recent McKinsey report, where the more cost-efficient network operation is described as a key reason for MNOs to invest in 5G.
The author provides a refreshing description of the market for cellular communications, which is important in a time when the research community focuses more on broad 5G visions than on the customers’ actual needs. The book is thus a recommended read for 5G researchers, since we should all ask ourselves if we are developing a technology that tackles the right unsolved problems.
Webb criticizes not only the economic incentives for 5G deployment, but also the 5G visions and technologies in general. The claims are in many cases reasonable; for example, Webb accurately points out that most of the 5G performance goals are overly optimistic and probably only required by a tiny fraction of the user base. He also correctly notes that some “5G applications” already have a wireless solution (e.g., indoor IoT devices connected over WiFi) or should preferably be wired (e.g., ultra-reliable low-latency applications such as remote surgery).
However, it is also in this part of the book that the argumentation sometimes falls short. For example, Webb extrapolates a recent drop in traffic growth to claim that the global traffic volume will reach a plateau in 2027. It is plausible that the traffic growth rate will reduce as a larger and larger fraction of the global population gets access to wireless high-speed connections. But one should bear in mind that we have witnessed an exponential growth in wireless communication traffic for the past century (known as Cooper’s law), so this trend can just as well continue for a few more decades, potentially at a lower growth rate than in the past decade.
Webb also provides a misleading description of multiuser MIMO by claiming that 1) the antenna arrays would be unreasonably large at cellular frequencies and 2) the beamforming requires complicated angular beam-steering. These are two of the myths that we dispelled in the paper “Massive MIMO: Ten myths and one grand question” last year. In fact, testbeds have demonstrated that massive multiuser MIMO is feasible in lower frequency bands, and particularly useful for improving the spectral efficiency through coherent beamforming and spatial multiplexing of users. Reciprocity-based beamforming is a solution for mobile and cell-edge users, for which angular beam-steering indeed is inefficient.
The book is not as pessimistic about the future as it might seem from this review. Webb provides an alternative vision for future wireless communications, where consistent connectivity rather than higher peak rates is the main focus. This coincides with one of the 5G performance goals (i.e., 50 Mbit/s everywhere), but Webb advocates an extensive government-supported deployment of WiFi instead of 5G technology. The use of WiFi is not a bad idea; I personally consume relatively little cellular data since WiFi is available at home, at work, and at many public locations in Sweden. However, cellular services are necessary to realize the dream of consistent connectivity, particularly outdoors and when in motion. This is where a 5G cellular technology that delivers better coverage and higher data rates at the cell edge is highly desirable. Reciprocity-based Massive MIMO seems to be the solution that can deliver this; thus, Webb would have had a stronger case if this technology had been properly integrated into his vision.
In summary, the combination of 5G Massive MIMO for wide-area coverage and WiFi for local-area coverage might be the way to truly deliver consistent connectivity.
]]>Impressive, and important.
Granted, this number does not include the complexity of FFTs, sampling rate conversions, and several other (non-insignificant) tasks; however, it does include the bulk of the “Massive-MIMO”-specific digital processing. The design exploits a number of tricks and Massive-MIMO-specific properties – in particular, the diagonal dominance of the channel Gramian under sufficiently favorable propagation.
When I started work on Massive MIMO in 2009, the common view was that the technology would be infeasible because of its computational complexity. In particular, the sheer idea of performing zero-forcing processing in real time was met with, if not ridicule, extreme skepticism. We quickly realized, however, that a reasonable DSP implementation would require no more than some ten Watts. While that is a small number in itself, it turned out to be an overestimate by orders of magnitude!
I spoke with some of the lead inventors of the chip, to learn more about its design. First, the architectures for decoding and for precoding differ a bit. While there is no fundamental reason for why this has to be so, one motivation is the possible use of nonlinear detectors on uplink. (The need for such detectors, for most “typical” cellular Massive MIMO deployments, is not clear – but that is another story.)
Second, and more importantly, the scalability of the design is not clear. While the complexity of the matrix operations themselves scales fast with the dimension, the precision of the arithmetic may have to be increased as well – resulting in a much-faster-than-cubic overall complexity scaling. Since Massive MIMO operates at its best when multiplexing many tens of terminals (or even thousands, in some applications), significant challenges remain for the future. That is good news for circuit engineers, algorithm designers, and communications theoreticians alike. The next ten years will be exciting.
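To make the scaling argument concrete, here is a rough operation-count sketch. This is my own illustrative assumption, not a description of the actual chip's architecture: zero-forcing processing forms and inverts a K×K channel Gramian once per coherence interval.

```python
# Rough flop-count sketch for zero-forcing processing (an illustrative
# assumption, not the architecture of the actual chip). With M antennas
# and K terminals, forming the K x K channel Gramian H^H H costs about
# M*K^2 complex multiply-adds, and inverting it about K^3 more.

def zf_flops_per_coherence_interval(M, K):
    """Approximate complex multiply-adds: Gramian formation + inversion."""
    return M * K**2 + K**3

# Doubling the number of terminals roughly quadruples the Gramian cost
# and multiplies the inversion cost by eight:
print(zf_flops_per_coherence_interval(128, 8))    # 8704
print(zf_flops_per_coherence_interval(128, 16))   # 36864
```

This back-of-the-envelope count deliberately ignores arithmetic precision: as noted above, wider word lengths at larger dimensions push the true cost to grow much faster than cubically.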
]]>The basic presumption of TDD/reciprocity-based Massive MIMO is that all activity, comprising the transmission of uplink pilots, uplink data and downlink data, takes place inside of a coherence interval:
At fixed mobility, in meters per second, the dimensionality of the coherence interval is proportional to the wavelength, because the Doppler spread is proportional to the carrier frequency.
In a single cell, with max-min fairness power control (for uniform quality-of-service provision), the sum-throughput of Massive MIMO can be computed analytically and is given by the following formula:

$C = B \cdot K \left(1 - \frac{K}{B T_c}\right) \log_2(1 + \mathrm{SINR})$

In this formula, $B$ is the bandwidth in Hz, $T_c$ is the coherence time in seconds, $K$ is the number of multiplexed terminals, and $\mathrm{SINR}$ is the effective signal-to-interference-and-noise ratio granted to each terminal by the max-min power control.
This formula assumes independent Rayleigh fading, but the general conclusions remain under other models.
The factor that pre-multiplies the logarithm, $K(1 - K/(B T_c))$, depends on $K$.
The pre-log factor is maximized when $K = B T_c/2$. The maximal value is $B T_c/4$, which is proportional to $T_c$, and therefore proportional to the wavelength. Due to the multiplication $B T_c$, one can get the same pre-log factor using a smaller bandwidth by instead increasing the wavelength, i.e., reducing the carrier frequency. At the same time, assuming appropriate scaling of the number of antennas, $M$, with the number of terminals, $K$, the quantity inside of the logarithm is a constant.
Concluding, the sum spectral efficiency (in b/s/Hz) can easily double for every doubling of the wavelength: a megahertz of bandwidth at a 100 MHz carrier is worth ten times more than a megahertz of bandwidth at a 1 GHz carrier. So while there is more bandwidth available at higher carriers, the potential multiplexing gains are correspondingly smaller.
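This pre-log argument is easy to check numerically. The coherence numbers below (200 kHz by 1 ms) are illustrative assumptions:

```python
# Numerical check of the pre-log argument: with tau = B*T_c samples per
# coherence interval and K terminals spending K samples on uplink pilots,
# the sum-rate pre-log factor is K*(1 - K/tau). It peaks at K = tau/2,
# where it equals tau/4 -- proportional to T_c and hence to the wavelength.
# The values of B and T_c below are illustrative assumptions.

def prelog(K, B, Tc):
    tau = B * Tc                  # coherence interval in samples
    return K * (1 - K / tau)

B, Tc = 200e3, 1e-3               # 200 kHz x 1 ms => tau = 200 samples
tau = B * Tc
print(prelog(tau / 2, B, Tc))     # peak value tau/4 = 50.0
# Doubling the wavelength doubles T_c (and tau), doubling the peak:
print(prelog(tau, B, 2 * Tc))     # new peak (2*tau)/4 = 100.0
```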
In this example, all three setups give the same sum-throughput; however, the throughput per terminal is vastly different.
The Bristol team has now worked with British Telecom and conducted trials at their site in Adastral Park, Suffolk, in more demanding user scenarios. In the indoor exhibition hall trial, 24 user streams were multiplexed over a 20 MHz bandwidth, resulting in a sum rate of 2 Gbit/s or a spectral efficiency of 100 bit/s/Hz/cell.
Several outdoor experiments were also conducted, which included user mobility. We are looking forward to seeing more details on these experiments, but in the meantime one can have a look at the following video:
Update: We have corrected the bandwidth number in this post.
]]>It is physics that makes it difficult to provide good coverage. The transmitted signals spread out, and only a tiny fraction of the transmitted power reaches the receive antenna (e.g., one part in a billion). In cellular networks, the received signal power decays roughly as the fourth power of the propagation distance. This results in the following data rate coverage behavior:
This figure considers an area covered by nine base stations, which are located at the middle of the nine peaks. Users that are close to one of the base stations receive the maximum downlink data rate, which in this case is 60 Mbit/s (i.e., a spectral efficiency of 6 bit/s/Hz over a 10 MHz channel). As a user moves away from a base station, the data rate drops rapidly. At the cell edge, where the user is equally distant from multiple base stations, the rate is nearly zero in this simulation. This is because the received signal power is low compared to the receiver noise.
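The shape of this behavior can be sketched in a few lines. The reference SNR and distances below are my own illustrative assumptions, not the exact parameters of the simulation:

```python
import numpy as np

# Sketch of the distance behavior described above (illustrative numbers):
# the received SNR falls off as d^-4, and the spectral efficiency is capped
# at 6 bit/s/Hz over a 10 MHz channel, i.e., at most 60 Mbit/s near the
# base station. The reference SNR at 50 m is an assumed value.

B = 10e6            # bandwidth in Hz
SNR_AT_50M = 1000   # assumed reference SNR at d = 50 m

def rate_mbit(d):
    """Downlink rate in Mbit/s at distance d meters (single cell, no interference)."""
    snr = SNR_AT_50M * (50 / d) ** 4
    return B * min(6, np.log2(1 + snr)) / 1e6

for d in (50, 100, 200, 400):
    print(f"{d:4d} m: {rate_mbit(d):5.1f} Mbit/s")
```

The rate stays at its 60 Mbit/s cap near the base station and collapses toward zero a few hundred meters out, matching the peaks-and-valleys picture in the figure.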
What can be done to improve the coverage?
One possibility is to increase the transmit power. This is mathematically equivalent to densifying the network, so that the area covered by each base station is smaller. The figure below shows what happens if we use 100 times more transmit power:
There are some visible differences as compared to Figure 1. First, the region around the base station that gives 60 Mbit/s is larger. Second, the data rates at the cell edge are slightly improved, but there are still large variations within the area. However, it is no longer the noise that limits the cell-edge rates—it is the interference from other base stations.
The inter-cell interference remains even if we further increase the transmit power. The reason is that the desired signal power as well as the interfering signal power grow in the same manner at the cell edge. Similar things happen if we densify the network by adding more base stations, as nicely explained in a recent paper by Andrews et al.
Ideally, we would like to increase only the power of the desired signals, while keeping the interference power fixed. This is what transmit precoding from a multi-antenna array can achieve; the transmitted signals from the multiple antennas at the base station add constructively only at the spatial location of the desired user. More precisely, the signal power is proportional to M (the number of antennas), while the interference power caused to other users is independent of M. The following figure shows the data rates when we go from 1 to 100 antennas:
Figure 3 shows that the data rates are increased for all users, but particularly for those at the cell edge. In this simulation, everyone is now guaranteed a minimum data rate of 30 Mbit/s, while 60 Mbit/s is delivered in a large fraction of the coverage area.
In practice, the propagation losses are not only distance-dependent, but also affected by other large-scale effects, such as shadowing. The properties described above nevertheless remain. Coherent precoding from a base station with many antennas can greatly improve the data rates for the cell-edge users, since only the desired signal power (and not the interference power) is increased. Higher transmit power or smaller cells will only lead to an interference-limited regime where the cell-edge performance remains poor. A practical challenge with coherent precoding is that the base station needs to learn the user channels, but reciprocity-based Massive MIMO provides a scalable solution to that. That is why Massive MIMO is the key technology for delivering ubiquitous connectivity in 5G.
]]>There are no specific details of the experimental setup or implementation in any of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology and hopefully it will result in something revolutionary in the years to come.
Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.
We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?
Update: Blue Danube Systems has released a press release describing trials of FDD Massive MIMO as well. Many companies apparently want to be “first” with this technology for LTE.
]]>To look into this, consider a communication system operating over a bandwidth of $B$ Hz. By assuming an additive white Gaussian noise channel, the capacity becomes

$C = B \log_2\left(1 + \frac{P \beta}{B N_0}\right) \ \mathrm{bit/s},$

where $P$ W is the transmit power, $\beta$ is the channel gain, and $N_0$ W/Hz is the power spectral density of the noise. The term inside the logarithm is referred to as the signal-to-noise ratio (SNR).
Since the bandwidth appears in front of the logarithm, it might seem that the capacity grows linearly with the bandwidth. This is not the case, since the noise term in the SNR also grows linearly with the bandwidth. This fact is illustrated by Figure 1 below, where we consider a system that achieves an SNR of 0 dB at a reference bandwidth of 20 MHz. As we increase the bandwidth towards 2 GHz, the capacity grows only modestly. Despite the 100 times more bandwidth, the capacity only improves by a factor of around 1.44, which is far from the factor of 100 that a linear increase would give.
The reason for this modest capacity growth is the fact that the SNR reduces inversely proportionally to the bandwidth. One can show that the capacity has a finite limit:

$C \to \frac{P \beta}{N_0} \log_2(e) \quad \mathrm{as} \ B \to \infty.$

The convergence to this limit is seen in Figure 1 and is relatively fast, since $\log_2(1+x) \approx x \log_2(e)$ for $0 \leq x \ll 1$.
To achieve a linear capacity growth, we need to keep the SNR fixed as the bandwidth increases. This can be achieved by increasing the transmit power proportionally to the bandwidth, which entails using more power when operating over a wider bandwidth. This might not be desirable in practice, at least not for battery-powered devices.
An alternative is to use beamforming to improve the channel gain. In a Massive MIMO system, the effective channel gain is $M \beta$, where $M$ is the number of antennas and $\beta$ is the gain of a single-antenna channel. Hence, we can increase the number of antennas proportionally to the bandwidth to keep the SNR fixed.
Figure 2 considers the same setup as in Figure 1, but now we also let either the transmit power or the number of antennas grow proportionally to the bandwidth. In both cases, we achieve a capacity that grows proportionally to the bandwidth, as we initially hoped for.
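The trends in both figures can be reproduced with a few lines, using the same reference point (SNR of 0 dB at 20 MHz):

```python
import numpy as np

# Reproduces the trends in Figures 1 and 2. The assumed reference point
# is an SNR of 0 dB at B = 20 MHz, i.e., P*beta/N0 = 20e6.

def capacity(B, M=1, power_scale=1.0):
    snr = power_scale * M * 20e6 / B     # SNR = P*beta*M/(B*N0)
    return B * np.log2(1 + snr)          # capacity in bit/s

C_ref = capacity(20e6)
# 100x the bandwidth, fixed power, one antenna: only ~1.44x the capacity.
print(capacity(2e9) / C_ref)
# Scaling the transmit power (or the antenna count) with B restores the
# linear growth, as in Figure 2:
print(capacity(2e9, power_scale=100) / C_ref)   # 100.0
print(capacity(2e9, M=100) / C_ref)             # 100.0
```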
In conclusion, to make efficient use of more bandwidth we require more transmit power or more antennas at the transmitter and/or receiver. It is worth noting that these requirements are purely due to the increase in bandwidth. In addition, for any given bandwidth, the operation at millimeter-wave frequencies requires much more transmit power and/or more antennas (e.g., additional constant-gain antennas or one constant-aperture antenna) just to achieve the same SNR as in a system operating at conventional frequencies below 5 GHz.
]]>Physics has given us the reciprocity principle. It should be exploited in wireless system design.
]]>A good system design definitely must not ignore pilot interference. While it is easily removed “on the average” through greater-than-one reuse, the randomness present in wireless communications – especially the shadow fading – will occasionally cause a few terminals to be severely hit by pilot contamination and bring down their performance. This is problematic whenever we are concerned about the provision of uniformly great service in the cell – and that is one of the principal selling arguments for Massive MIMO. Notwithstanding, the impact of pilot contamination can be reduced significantly in practice by appropriate pilot reuse and judicious power control. (Chapters 5-6 in Fundamentals of Massive MIMO give many details.)
A more fundamental question is whether pilot contamination could be entirely overcome: Does there exist an upper bound on capacity that saturates as the number of antennas, M, is increased indefinitely? Some have speculated that capacity must saturate, much in line with known capacity upper bounds for cellular base station cooperation. While this question may be of more academic than practical interest, it has long been open except in some trivial special cases: If the channels of two terminals lie in non-overlapping subspaces and Bayesian channel estimation is used, the channel estimates will not be contaminated; capacity then grows as log(M) when M increases without bound.
A much deeper result is established in this recent paper: the subspaces of the channel covariances may overlap, yet capacity grows as log(M). Technically, Rayleigh fading with spatial correlation is assumed, and the correlation matrices of the contaminating terminals must only be linearly independent as M goes to infinity (exact conditions in the paper). In retrospect, this is not unreasonable given the substantial a priori knowledge exploited by the Bayesian channel estimator, but I found it amazing how weak the required conditions on the correlation matrices are. It remains unclear whether the result generalizes to the case of a growing number of interferers: letting the number of antennas go to infinity and then growing the network is not the same thing as taking an “infinite” (scalable) network and increasing the number of antennas. But this paper elegantly and rigorously answers a long-standing question that has been the subject of much debate in the community – and is a recommended read for anyone interested in the fundamental limits of Massive MIMO.
]]>For millimeter wave, the huge bandwidth was identified as the key benefit. Rappaport predicted that 30 GHz of bandwidth would be available in 5 years' time, while other panelists made a more conservative prediction of 15-20 GHz in 10 years' time. With such a huge bandwidth, a spectral efficiency of 1 bit/s/Hz is sufficient for an access point to deliver tens of Gbit/s to a single user. The panelists agreed that much work remains on millimeter wave channel modeling and on the design of circuits that can deliver the theoretical performance without huge losses. The lack of robustness towards blockage and similar propagation phenomena is also a major challenge.
For Massive MIMO, the straightforward support of user mobility, multiplexing of many users, and wide-area coverage were mentioned as key benefits. A 10x-20x gain in per-cell spectral efficiency, with performance guarantees for every user, was another major factor. Since these gains come from spatial multiplexing of users, rather than increasing the spectral efficiency per user, a large number of users are required to achieve these gains in practice. With a small number of users, the Massive MIMO gains are modest, so it might not be a technology to deploy everywhere. Another drawback is the limited amount of spectrum in the range below 5 GHz, which limits the peak data rates that can be achieved per user. The technology can deliver tens of Mbit/s, but maybe not any Gbit/s per user.
Although the purpose of the panel was to debate the two 5G candidate technologies, I believe that the panelists agree that these technologies have complementary benefits. Today, you connect to WiFi when it is available and switch to cellular when the WiFi network cannot support you. Similarly, I imagine a future where you will enjoy the great data rates offered by millimeter wave, when you are covered by such an access point. Your device will then switch seamlessly to a Massive MIMO network, operating below 5 GHz, to guarantee ubiquitous connectivity when you are in motion or not covered by any millimeter wave access points.
]]>I have argued before that in a mobile access environment, no more than a few hundred antennas per base station will be useful. In an environment without significant mobility, however, the answer is different. In [1, Sec. 6.1], one case study establishes the feasibility of providing (fixed) wireless broadband service to 3000 homes, using a single isolated base station with 3200 antennas (zero-forcing processing and max-min power control). The power consumption of the associated digital signal processing is estimated in [1, homework #6.6] to less than 500 Watt. The service of this many terminals is enabled by the long channel coherence (50 ms in the example).
Is this as massive as MIMO could ever get? Perhaps not. Conceivably, there will be environments with even larger channel coherence. Consider, for example, an outdoor city square with no cars or other traffic – hence no significant mobility. Eventually only measurements can determine the channel coherence, but assuming, for the sake of argument, a coherence of 200 ms by 400 kHz gives room for training of 40,000 terminals (assuming no more than 50% of the resources are spent on training). Multiplexing these terminals would require at least 40,000 antennas, which would, at 3 GHz and half-wavelength spacing, occupy an area of 10 x 10 meters (say, with a square array) – easily integrated onto the face of a skyscraper.
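The arithmetic above can be summarized in a few lines (the 200 ms by 400 kHz coherence is, as stated, an assumption for the sake of argument):

```python
import math

# Back-of-the-envelope numbers from the paragraph above. The coherence
# time and bandwidth are assumed values, as in the text.
Tc, Bc = 200e-3, 400e3            # coherence time (s) and bandwidth (Hz)
samples = Tc * Bc                 # 80,000 samples per coherence interval
terminals = samples // 2          # at most half spent on training -> 40,000

wavelength = 3e8 / 3e9            # 0.1 m at a 3 GHz carrier
spacing = wavelength / 2          # half-wavelength antenna spacing
side = math.sqrt(terminals) * spacing   # side of a square array
print(terminals, side)            # 40000.0 terminals, 10.0 m
```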
Is this science fiction or will we be seeing this application in the future? The application is fully feasible, with today’s circuit technology, and does not violate known physical or information theoretic constraints. Machine-to-machine, IoT, or perhaps virtual-reality-type applications may eventually create the desirability, or need, to build extreme Massive MIMO.
]]>The METIS research project has identified twelve test cases for 5G connectivity. One of these is the “Dense urban information society”, which is
“…concerned with the connectivity required at any place and at any time by humans in dense urban environments. We here consider both the traffic between humans and the cloud, and also direct information exchange between humans or with their environment. The particular challenge lies in the fact that users expect the same quality of experience no matter whether they are at their workplace, enjoying leisure activities such as shopping, or being on the move on foot or in a vehicle.”
Source: METIS, deliverable D1.1 “Scenarios, requirements and KPIs for 5G mobile and wireless system”
Hence, the challenge is to provide ubiquitous connectivity in urban areas, where there will be massive user loads in the future: up to 200,000 devices per km^{2} is predicted by METIS. In their test case, each device requests one data packet per minute, which should be transferred within one second. Hence, there are on average up to 200,000/60 ≈ 3,333 active users per km^{2} at any given time.
This large number of users is a challenge that Massive MIMO is particularly well-suited for. One of the key benefits of the Massive MIMO technology is the high spectral efficiency that it achieves by spatial multiplexing of tens of users per cell. Suppose, for example, that the cells are deployed in a hexagonal pattern with a base station in each cell center, as illustrated in the figure. How many simultaneously active users will there be per cell in the dense urban information society? That depends on the area of a cell. An inter-site distance (ISD) of 0.25 km is common in contemporary urban deployments. In this case, one can show that the area covered by each cell is √3×ISD^{2}/2 ≈ 0.05 km^{2}.
The number of active users per cell is then obtained by multiplying the cell area with the user density. Three examples are provided in the table below:
| | 10^{3} users/km^{2} | 10^{4} users/km^{2} | 10^{5} users/km^{2} |
|---|---|---|---|
| Total number of users per cell | 54 | 540 | 5400 |
| Average active users per cell | 0.9 | 9 | 90 |
Recall that 1/60 of the total number of users are active simultaneously, in the urban information society test case. This gives the numbers in the second row of the table.
From this table, notice that there will be tens of simultaneously active users per cell, when the user density is above 10,000 per km^{2}. This is a number substantially smaller than the 200,000 per km^{2} predicted by the METIS project. Hence, there will likely be many future urban deployment scenarios with sufficiently many users to benefit from Massive MIMO.
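The numbers in the table follow directly from the stated assumptions (hexagonal cells, ISD = 0.25 km, 1/60 of the users active at any given time):

```python
import math

# Recomputes the table above from the post's assumptions.
ISD = 0.25                                  # inter-site distance in km
cell_area = math.sqrt(3) * ISD**2 / 2       # ~0.054 km^2 per hexagonal cell

for density in (1e3, 1e4, 1e5):             # users per km^2
    total = density * cell_area             # total users in the cell
    active = total / 60                     # 1/60 active simultaneously
    print(f"{density:8.0f} users/km^2: {total:6.0f} total, {active:5.1f} active")
```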
A fraction of these users can (and probably will) be offloaded to WiFi-like networks, maybe operating at mmWave frequencies. But since local-area networks provide only patchy coverage, it is inevitable that many users and devices will rely on the cellular networks to achieve ubiquitous connectivity, with uniform quality-of-service everywhere.
In summary, Massive MIMO is what we need to realize the dream of ubiquitous connectivity in the dense urban information society.
Merouane Debbah, Vice-President of the Huawei France R&D center, confirms to the Massive MIMO blog that this spectral efficiency was achieved in the downlink, using TDD and exploiting channel reciprocity. This comes as no surprise, as it is not plausible that this performance could be sustained with FDD-style CSI feedback.
Another piece of evidence, that the theoretical predictions of Massive MIMO performance are for real.
]]>The multi-user MIMO concept, then called space-division multiple access (SDMA), was picked up by the industry in the nineties. For example, Ericsson made field-trials with antenna arrays in GSM systems, which were reported in “Adaptive antennas for GSM and TDMA systems” from 1999. ArrayComm filed an SDMA patent in 1991 and made trials in the nineties. In cooperation with the manufacturer Kyocera, this resulted in commercial deployment of SDMA as an overlay to the TDD-based Personal Handy-phone System (PHS).
Given this history, why isn’t multi-user MIMO a key ingredient in current cellular networks? I think there are several answers to this question: FDD operation dominated the cellular networks, and acquiring downlink channel state information in FDD is hard outside of simple propagation scenarios; the signal processing complexity was too high for the hardware of the time; and the early protocols did not scale to large numbers of antennas and users.
Why is multi-user MIMO considered a key 5G technology? Basically because the three issues described above have now changed substantially. There is a renewed interest in TDD, with successful cellular deployments in Asia and WiFi being used everywhere. Massive MIMO is the refined form of multi-user MIMO, where the TDD operation enables channel estimation in any propagation environment, the many antennas allow for low-complexity signal processing, and the scalable protocols are suitable for large-scale deployments. The technology can nowadays be implemented using power-efficient off-the-shelf radio-frequency transceivers, as demonstrated by testbeds. Massive MIMO builds upon a solid ground of information theory, which shows how to communicate efficiently under practical impairments such as interference and imperfect channel knowledge.
Maybe most importantly, spatial multiplexing is needed to manage the future data traffic growth. This is because deploying many more base stations or obtaining much more spectrum are not viable options if we want to maintain network coverage—small cells at street level are easily shadowed by buildings, and mm-wave frequency signals do not propagate well through walls. In 5G networks, a typical cellular base station might have tens of active users at a time, which is a sufficient number to benefit from the great spectral efficiency offered by Massive MIMO.
]]>“Massive MIMO is a useful and scalable version of Multiuser MIMO. There are three fundamental distinctions between Massive MIMO and conventional Multiuser MIMO. First, only the base station learns G. Second, M is typically much larger than K, although this does not have to be the case. Third, simple linear signal processing is used both on the uplink and on the downlink. These features render Massive MIMO scalable with respect to the number of base station antennas, M.”
(Note: M is the number of antennas, K is the number of users, and G denotes the channel matrix).
In [2], we find another definition:
“Massive MIMO is a multi-user MIMO system with M antennas and K users per BS. The system is characterized by M ≫ K and operates in TDD mode using linear uplink and downlink processing.”
Both are nice general definitions that cover most systems that are commonly called “Massive MIMO”. However, their generality also makes them vague and they fail to pinpoint the essence of Massive MIMO. Here is my take on a slightly more precise definition:
“Massive MIMO is a multi-user MIMO system that (1) serves multiple users through spatial multiplexing over a channel with favorable propagation in time-division duplex and (2) relies on channel reciprocity and uplink pilots to obtain channel state information.”
Now, you might ask: So what is then “favorable propagation”? We need a second definition:
“The propagation is said to be favorable when users are mutually orthogonal in some practical sense.”
Again you ask: in what practical sense? If h∈ℂᴹ is the channel vector to one user and g∈ℂᴹ the channel vector to another, the users are said to be orthogonal if hᴴg = 0. Unfortunately, this is never true in a real system. It can be practically true, however, if we say that users are practically orthogonal when hᴴg/(‖h‖‖g‖) has mean zero and a variance that is much smaller than one.
There we go: a more-or-less rigorous definition of Massive MIMO. Note that this definition does not require the number of users to be small in any sense. So, to the big question: How many antennas does a base station need to be “massive”? The answer is given for the i.i.d. Rayleigh fading channel in the following curve that shows how the users’ channels become practically orthogonal as the number of antennas is increased.
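The curve itself is not reproduced here, but the trend behind it is easy to check with a short Monte-Carlo sketch (my own illustration, not from the original post): for i.i.d. Rayleigh fading, the normalized inner product hᴴg/(‖h‖‖g‖) has zero mean and a variance that falls off as 1/M, so the users become practically orthogonal as the array grows.

```python
import numpy as np

# Monte-Carlo check of "practical orthogonality" under i.i.d. Rayleigh fading:
# for independent channel vectors h, g ~ CN(0, I_M), the normalized inner
# product h^H g / (||h|| ||g||) has zero mean and variance exactly 1/M.
rng = np.random.default_rng(0)
trials = 100_000

def cn(shape):
    """Draw i.i.d. CN(0, 1) entries (circularly symmetric complex Gaussian)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

for M in (10, 100, 1000):
    h, g = cn((trials, M)), cn((trials, M))
    inner = np.sum(np.conj(h) * g, axis=1)
    normalized = inner / (np.linalg.norm(h, axis=1) * np.linalg.norm(g, axis=1))
    print(f"M = {M:4d}: variance ≈ {np.var(normalized):.4f} (1/M = {1 / M:.4f})")
```

Running this shows the variance shrinking by an order of magnitude each time M does, which is the sense in which "massive" arrays make channels favorable.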
In that application, I don’t think so. Here is why.
What ultimately limits Massive MIMO is mobility: no more than half of the coherence time-bandwidth product should be occupied by pilot transmission activities. (This is the “half and half rule”.) In a macro-cellular deployment at 3 GHz with highway mobility, we may have on the order of 200 kHz x 1 millisecond of coherence; that is 200 samples. With a pilot reuse factor of 3 (which practically does away with pilot contamination), we could then ultimately learn the channels of some 30 simultaneously served terminals, assuming mutually orthogonal pilots. Once the number of base station antennas M reaches beyond twice this number with some margin – say M=100 – the spectral efficiency grows only logarithmically with M. That means that even doubling M yields only a 3 dB effective SINR increase, that is, a single extra bit/s/Hz per terminal. Beyond M=100 or M=200, it may not be worth it. Multiple antennas are only truly useful if they are used to multiplex, and mobility limits the amount of multiplexing we can perform.
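The pilot-budget arithmetic is simple enough to sketch (the numbers come from the argument above; the variable names are mine):

```python
# Back-of-the-envelope pilot budget from the "half and half rule".
coherence_bandwidth_hz = 200e3  # ~200 kHz at 3 GHz with highway mobility
coherence_time_s = 1e-3         # ~1 ms
coherence_samples = coherence_bandwidth_hz * coherence_time_s  # 200 samples

# At most half the coherence block may be spent on pilots.
max_pilot_samples = coherence_samples / 2  # 100 pilot symbols

pilot_reuse = 3  # reuse factor that practically removes pilot contamination
terminals = int(max_pilot_samples // pilot_reuse)

print(terminals)  # ~33, i.e. "some 30" simultaneously served terminals
```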
So why not quadruple the number of antennas for additional coverage? That may not be worth it either. Going from M=200 to M=2000 gives 10 dB, which pays for a 75% range extension or, alternatively, a tenth of the losses incurred by an energy-saving coated window glass.
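For the range-extension figure, a 10 dB array gain translates into roughly 75% more range if the path-loss exponent is around 4; note that the exponent is my assumption for illustration, not stated in the post:

```python
import math

# Going from M=200 to M=2000 antennas gives 10*log10(2000/200) = 10 dB.
gain_db = 10 * math.log10(2000 / 200)

# Assumed macro-cell path-loss exponent (hypothetical; not given in the post).
path_loss_exponent = 4.1

# With path loss ~ d^(-n), range scales as 10^(gain_dB / (10 * n)).
range_factor = 10 ** (gain_db / (10 * path_loss_exponent))
print(f"{gain_db:.0f} dB -> range x{range_factor:.2f}")  # ≈ x1.75, ~75% extension
```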
In stationary environments, the story is different – a topic that we will be returning to.
]]>suggest the use of 1-bit ADCs in Massive MIMO base station receivers. These are important studies of a concept that offers great potential for cost savings and simplification of transceiver hardware.
Granted, much lower resolution will be sufficient in Massive MIMO than in conventional MIMO, but will one bit be sufficient? These papers indicate that the price to pay is not insignificant: the number of antennas may have to be doubled in some cases. Also, while the symbol-sampled models used in these studies may give correct order-of-magnitude estimates of capacity, much future work remains to understand the effects of digital channelization/prefiltering and sampling rate conversion if 1-bit frontends are to be used.
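To get a feel for why the loss is real but modest, recall the classical low-SNR result that 1-bit quantization costs a factor 2/π (about 2 dB) in effective SNR. A short simulation (my own illustration, not from the papers) recovers this factor from the correlation between a Gaussian signal and its sign:

```python
import numpy as np

# Low-SNR effect of 1-bit quantization: for Gaussian x, the correlation between
# x and sign(x) is sqrt(2/pi), so the effective SNR is scaled by 2/pi ≈ -2 dB.
rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
q = np.sign(x)  # a 1-bit ADC keeps only the sign

rho = np.corrcoef(x, q)[0, 1]         # ≈ sqrt(2/pi) ≈ 0.798
snr_loss_db = 10 * np.log10(rho**2)   # ≈ -1.96 dB
print(f"correlation ≈ {rho:.3f}, SNR loss ≈ {snr_loss_db:.2f} dB")
```

Compensating ~2 dB at low SNR takes roughly π/2 ≈ 1.6 times more antennas; at higher SNR the penalty grows, which is consistent with the papers' finding that the array may need to be doubled in some cases.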
]]>The interesting part starts at 2:48, with the terminals onboard cars. While it has been contested whether Massive MIMO can work in mobility (because of channel aging), this experiment confirms that it does, as theory has long predicted. In fact, at a 3.7 GHz carrier and with a slot length of 0.5 ms, the maximum permitted mobility (assuming a two-ray model with Nyquist sampling and a factor-of-two design margin) is over 140 km/h. So the experiment is probably still not close to the physical limits.
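The mobility figure can be reproduced as follows; the Nyquist condition used (slot length no longer than 1/(2 × Doppler × margin)) is my reading of the two-ray argument:

```python
# Maximum speed such that a 0.5 ms slot still samples a two-ray channel at
# the Nyquist rate, with a factor-of-two design margin.
c = 3e8        # speed of light, m/s
fc = 3.7e9     # carrier frequency, Hz
slot = 0.5e-3  # slot length, s
margin = 2     # design margin

# Nyquist with margin: slot <= 1 / (2 * doppler * margin)
max_doppler_hz = 1 / (2 * slot * margin)       # 500 Hz
max_speed_kmh = max_doppler_hz * c / fc * 3.6  # ≈ 146 km/h

print(f"max Doppler ≈ {max_doppler_hz:.0f} Hz, max speed ≈ {max_speed_kmh:.0f} km/h")
```

With these assumptions the limit comes out at roughly 146 km/h, matching the "over 140 km/h" figure.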
]]>