Category Archives: 5G

Field Tests of FDD Massive MIMO

Frequency-division duplex (FDD) operation of Massive MIMO in LTE is the topic of two press releases from January 2017. The first press release describes a joint field test carried out by ZTE and China Telecom. It claims three-fold improvements in per-cell spectral efficiency using standard LTE devices, but no further details are given. The second press release describes a field verification carried out by Huawei and China Unicom. The average data rate was 87 Mbit/s per user over a 20 MHz channel and was achieved using commercial LTE devices. This corresponds to a spectral efficiency of 4.36 bit/s/Hz per user. A sum rate of 697 Mbit/s is also mentioned, from which one could guess that eight users were multiplexed (87 × 8 = 696).

Image source: Huawei

There are no specific details of the experimental setup or implementation in either of these press releases, so we cannot tell how well the systems perform compared to a baseline TDD Massive MIMO setup. Maybe this is just a rebranding of the FDD multiuser MIMO functionality in LTE, evolved with a few extra antenna ports. It is nonetheless exciting to see that several major telecom companies want to associate themselves with the Massive MIMO technology, and hopefully this will result in something revolutionary in the years to come.

Efficient FDD implementation of multiuser MIMO is a longstanding challenge. The reason is the difficulty in estimating channels and feeding back accurate channel state information (CSI) in a resource-efficient manner. Many researchers have proposed methods to exploit channel parameterizations, such as angles and spatial correlation, to simplify the CSI acquisition. This might be sufficient to achieve an array gain, but the ability to also mitigate interuser interference is less certain and remains to be demonstrated experimentally. Since 85% of the LTE networks use FDD, we have previously claimed that making Massive MIMO work well in FDD is critical for the practical success and adoption of the technology.

We hope to see more field trials of Massive MIMO in FDD, along with details of the measurement setups and evaluations of which channel acquisition schemes are suitable in practice. Will FDD Massive MIMO be exclusive to static users, whose channels are easily estimated, or can anyone benefit from it in 5G?

Update: Blue Danube Systems has released a press release describing trials of FDD Massive MIMO as well. Many companies apparently want to be “first” with this technology for LTE.

More Bandwidth Requires More Power or Antennas

The main selling point of millimeter-wave communications is the abundant bandwidth available in such frequency bands; for example, 2 GHz of bandwidth instead of 20 MHz as in conventional cellular networks. The underlying argument is that the use of much wider bandwidths immediately leads to much higher capacities, in terms of bit/s, but the reality is not that simple.

To look into this, consider a communication system operating over a bandwidth of $B$ Hz. If we assume an additive white Gaussian noise channel, the capacity becomes

     $$ C = B \log_2 \left(1+\frac{P \beta}{N_0 B} \right)$$

where $P$ W is the transmit power, $\beta$ is the channel gain, and $N_0$ W/Hz is the power spectral density of the noise. The term $(P \beta)/(N_0 B)$ inside the logarithm is referred to as the signal-to-noise ratio (SNR).

Since the bandwidth $B$ appears in front of the logarithm, it might seem that the capacity grows linearly with the bandwidth. This is not the case, since the noise term $N_0 B$ in the SNR also grows linearly with the bandwidth. This fact is illustrated by Figure 1 below, where we consider a system that achieves an SNR of 0 dB at a reference bandwidth of 20 MHz. As we increase the bandwidth towards 2 GHz, the capacity grows only modestly. Despite having 100 times more bandwidth, the capacity only improves by $1.44\times$, which is far from the $100\times$ that a linear increase would give.

Figure 1: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz. The transmit power is fixed.
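These numbers are easy to reproduce. Here is a minimal Python sketch, using the 0 dB reference SNR at 20 MHz from the example above, that evaluates the capacity formula for a fixed transmit power:

```python
import numpy as np

B_ref = 20e6    # reference bandwidth: 20 MHz
snr_ref = 1.0   # 0 dB SNR at the reference bandwidth
# Since the SNR is P*beta/(N0*B), a 0 dB SNR at B_ref fixes P*beta/N0:
P_beta_over_N0 = snr_ref * B_ref

def capacity(B):
    """AWGN capacity in bit/s over bandwidth B, with fixed transmit power."""
    return B * np.log2(1 + P_beta_over_N0 / B)

print(capacity(20e6) / 1e6)                  # 20.0 Mbit/s at 20 MHz
print(capacity(2e9) / 1e6)                   # ~28.7 Mbit/s at 2 GHz: only a 1.44x gain
print(P_beta_over_N0 * np.log2(np.e) / 1e6)  # ~28.9 Mbit/s: the limit as B -> infinity
```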

The reason for this modest capacity growth is that the SNR is inversely proportional to the bandwidth. One can show that

     $$ C \to \frac{P \beta}{N_0}\log_2(e) \quad \textrm{as} \,\, B \to \infty.$$

The convergence to this limit is seen in Figure 1 and is relatively fast since $\log_2(1+x) \approx x \log_2(e)$ for $0 \leq x \leq 1$.
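Spelling out this approximation for a fixed transmit power makes the limit explicit:

     $$ C = B \log_2 \left(1+\frac{P \beta}{N_0 B} \right) \approx B \cdot \frac{P \beta}{N_0 B} \log_2(e) = \frac{P \beta}{N_0} \log_2(e). $$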

To achieve a linear capacity growth, we need to keep the SNR $(P \beta)/(N_0 B)$ fixed as the bandwidth increases. This can be achieved by increasing the transmit power $P$ proportionally to the bandwidth, which entails using $100\times$ more power when operating over a $100\times$ wider bandwidth. This might not be desirable in practice, at least not for battery-powered devices.

An alternative is to use beamforming to improve the channel gain. In a Massive MIMO system, the effective channel gain is $\beta = \beta_1 M$, where $M$ is the number of antennas and $\beta_1$ is the gain of a single-antenna channel. Hence, we can increase the number of antennas proportionally to the bandwidth to keep the SNR fixed.

Figure 2: Capacity as a function of the bandwidth, for a system with an SNR of 0 dB over a reference bandwidth of 20 MHz with one antenna. The transmit power (or the number of antennas) is either fixed or grows proportionally to the bandwidth.

Figure 2 considers the same setup as in Figure 1, but now we also let either the transmit power or the number of antennas grow proportionally to the bandwidth. In both cases, we achieve a capacity that grows proportionally to the bandwidth, as we initially hoped for.
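As a minimal illustration of this scaling, here is a Python sketch under the same 0 dB reference assumption as in Figure 1; the antenna interpretation uses $\beta = \beta_1 M$ from above:

```python
import numpy as np

B_ref = 20e6                   # reference bandwidth: 20 MHz
P_beta_over_N0 = 1.0 * B_ref   # fixes a 0 dB SNR at B_ref, as in Figure 1

def capacity_fixed(B):
    """Capacity with a fixed transmit power (the Figure 1 curve)."""
    return B * np.log2(1 + P_beta_over_N0 / B)

def capacity_scaled(B):
    """Capacity when P (or the antenna count M, via beta = beta_1*M) grows
    as B/B_ref, which keeps the SNR at 0 dB for every bandwidth."""
    return B * np.log2(1 + (B / B_ref) * P_beta_over_N0 / B)

for B in [20e6, 200e6, 2e9]:
    print(f"{B/1e6:6.0f} MHz: fixed {capacity_fixed(B)/1e6:6.1f} Mbit/s, "
          f"scaled {capacity_scaled(B)/1e6:6.1f} Mbit/s")
# With the scaling, the SNR term is constant, so C = B*log2(2) = B:
# the capacity now grows linearly with the bandwidth, as in Figure 2.
```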

In conclusion, to make efficient use of more bandwidth we require more transmit power or more antennas at the transmitter and/or receiver. It is worth noting that these requirements are purely due to the increase in bandwidth. In addition, for any given bandwidth, the operation at millimeter-wave frequencies requires much more transmit power and/or more antennas (e.g., additional constant-gain antennas or one constant-aperture antenna) just to achieve the same SNR as in a system operating at conventional frequencies below 5 GHz.

Massive MIMO Trials in LTE Networks

Massive MIMO is often mentioned as a key 5G technology, but could it also be exploited in currently standardized LTE networks? The ZTE-Telefónica trials that were initiated in October 2016 show that this is indeed possible. The press release from late last year describes the first results. For example, the trial showed improvements in network capacity and cell-edge data rates of up to six times, compared to traditional LTE.

The Massive MIMO blog has talked with Javier Lorca Hernando at Telefónica to get further details. The trials were carried out at the Telefónica headquarters in Madrid. A base station with 128 antenna ports was deployed on the rooftop of one of their buildings and the users were located on one floor of the central building, approximately 100 m from the base station. The users basically had cell-edge conditions, due to the metallized glass and multiple metallic constructions surrounding them.

The uplink and downlink data transmissions were carried out in the 2.6 GHz band. Typical Massive MIMO time-division duplex (TDD) operation was considered, where the uplink detection and downlink precoding are based on uplink pilots and channel reciprocity. The existing LTE sounding reference signals (SRSs) were used as uplink pilots. The reciprocity-based precoding was implemented by using LTE’s transmission mode 8 (TM8), which supports any type of precoding. Downlink pilots were used for link adaptation and demodulation purposes.

It is great to see that Massive MIMO can also be implemented in LTE systems. In this trial, the users were static and relatively few, but it will be exciting to see if the existing LTE reference signals will also enable Massive MIMO communications for a multitude of mobile users!

Update: ZTE has carried out similar experiments in cooperation with Smartfren in Indonesia. Additional field trials are mentioned in the comments to this post.

Which Technology Can Give Greater Value?

The IEEE GLOBECOM conference, held in Washington, D.C. this week, featured many good presentations and exhibitions. One well-attended event was the industry panel “Millimeter Wave vs. Below 5 GHz Massive MIMO: Which Technology Can Give Greater Value?”, organized by Thomas Marzetta and Robert Heath. They invited one team of millimeter wave proponents (Theodore Rappaport, Kei Sakaguchi, Charlie Zhang) and one team of Massive MIMO proponents (Chih-Lin I, Erik G. Larsson, Liesbet Van der Perre) to debate the pros and cons of the two 5G technologies.


For millimeter wave, the huge bandwidth was identified as the key benefit. Rappaport predicted that 30 GHz of bandwidth would be available in five years’ time, while other panelists made a more conservative prediction of 15-20 GHz in ten years’ time. With such a huge bandwidth, a spectral efficiency of 1 bit/s/Hz is sufficient for an access point to deliver tens of Gbit/s to a single user. The panelists agreed that much work remains on millimeter wave channel modeling and on the design of circuits that can deliver the theoretical performance without huge losses. The lack of robustness towards blockage and similar propagation phenomena is also a major challenge.

For Massive MIMO, the straightforward support of user mobility, multiplexing of many users, and wide-area coverage were mentioned as key benefits. A 10x-20x gain in per-cell spectral efficiency, with performance guarantees for every user, was another major factor. Since these gains come from spatial multiplexing of users, rather than from increasing the spectral efficiency per user, a large number of users is required to achieve them in practice. With a small number of users, the Massive MIMO gains are modest, so it might not be a technology to deploy everywhere. Another drawback is the limited amount of spectrum in the range below 5 GHz, which limits the peak data rates that can be achieved per user. The technology can deliver tens of Mbit/s per user, but maybe not Gbit/s speeds.

Although the purpose of the panel was to debate the two 5G candidate technologies, I believe that the panelists agree that these technologies have complementary benefits. Today, you connect to WiFi when it is available and switch to cellular when the WiFi network cannot support you. Similarly, I imagine a future where you will enjoy the great data rates offered by millimeter wave, when you are covered by such an access point. Your device will then switch seamlessly to a Massive MIMO network, operating below 5 GHz, to guarantee ubiquitous connectivity when you are in motion or not covered by any millimeter wave access points.

The Dense Urban Information Society

5G cellular networks are supposed to deal with many challenging communication scenarios where today’s cellular networks fall short. In this post, we have a look at one such scenario, where Massive MIMO is key to overcoming the challenges.

The METIS research project has identified twelve test cases for 5G connectivity. One of these is the “Dense urban information society”, which is

“…concerned with the connectivity required at any place and at any time by humans in dense urban environments. We here consider both the traffic between humans and the cloud, and also direct information exchange between humans or with their environment. The particular challenge lies in the fact that users expect the same quality of experience no matter whether they are at their workplace, enjoying leisure activities such as shopping, or being on the move on foot or in a vehicle.”

Source: METIS, deliverable D1.1 “Scenarios, requirements and KPIs for 5G mobile and wireless system”

Hence, the challenge is to provide ubiquitous connectivity in urban areas, where there will be massive user loads in the future: up to 200,000 devices per km² are predicted by METIS. In their test case, each device requests one data packet per minute, which should be transferred within one second. This means that there are on average up to 200,000/60 ≈ 3,333 active users per km² at any given time.

Hexagonal cellular network, with adjacent cells having different colors for clarity.

This large number of users is a challenge that Massive MIMO is particularly well-suited for. One of the key benefits of the Massive MIMO technology is the high spectral efficiency that it achieves by spatial multiplexing of tens of users per cell. Suppose, for example, that the cells are deployed in a hexagonal pattern with a base station in each cell center, as illustrated in the figure. How many simultaneously active users will there be per cell in the dense urban information society? That depends on the area of a cell. An inter-site distance (ISD) of 0.25 km is common in contemporary urban deployments. In this case, one can show that the area covered by each cell is √3×ISD²/2 ≈ 0.054 km².


The total number of users per cell is then obtained by multiplying the cell area by the user density. Three examples are provided in the table below:

                                 10³ users/km²   10⁴ users/km²   10⁵ users/km²
Total number of users per cell        54              540             5400
Average active users per cell          0.9               9               90

Recall that 1/60 of the total number of users are active simultaneously in the dense urban information society test case. This gives the numbers in the last row of the table.

From this table, we notice that there will be tens of simultaneously active users per cell when the user density is above 10,000 per km². Note that this density is substantially smaller than the 200,000 per km² predicted by the METIS project. Hence, there will likely be many future urban deployment scenarios with sufficiently many users to benefit from Massive MIMO.
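These numbers are straightforward to reproduce. Here is a minimal Python sketch of the computation, using the 0.25 km inter-site distance and the 1/60 activity factor from the test case:

```python
import math

ISD = 0.25                             # inter-site distance in km
cell_area = math.sqrt(3) * ISD**2 / 2  # hexagonal cell area: ~0.054 km^2
activity = 1 / 60                      # one 1-second packet per minute per device

for density in [1e3, 1e4, 1e5]:        # users per km^2
    total = density * cell_area        # total users per cell
    print(f"{density:8.0f} users/km^2: {total:6.0f} users per cell, "
          f"{total * activity:5.1f} active on average")
```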

A fraction of these users can (and probably will) be offloaded to WiFi-like networks, maybe operating at mmWave frequencies. But since local-area networks provide only patchy coverage, it is inevitable that many users and devices will rely on the cellular networks to achieve ubiquitous connectivity, with uniform quality of service everywhere.

In summary, Massive MIMO is what we need to realize the dream of ubiquitous connectivity in the dense urban information society.

Cellular Multi-User MIMO: A Technology Whose Time has Come

Both the number of wirelessly connected devices and the traffic they generate have steadily grown since the early days of cellular communications. This continuously calls for improvements in the area capacity [bit/s/km²] of the networks. The use of adaptive antenna arrays was identified as a potential capacity-improving technology in the mid-eighties. An early uplink paper was “Optimum combining for indoor radio systems with multiple users” from 1987 by J. Winters at Bell Labs. An early downlink paper was “The performance enhancement of multibeam adaptive base-station antennas for cellular land mobile radio systems” by S. C. Swales et al. from 1990.

The multi-user MIMO concept, then called space-division multiple access (SDMA), was picked up by the industry in the nineties. For example, Ericsson made field-trials with antenna arrays in GSM systems, which were reported in “Adaptive antennas for GSM and TDMA systems” from 1999. ArrayComm filed an SDMA patent in 1991 and made trials in the nineties. In cooperation with the manufacturer Kyocera, this resulted in commercial deployment of SDMA as an overlay to the TDD-based Personal Handy-phone System (PHS).

Trial with a 12-element circular array by ArrayComm, in the late nineties.


Given this history, why isn’t multi-user MIMO a key ingredient in current cellular networks? I think there are several answers to this question:

  1. Most cellular networks use FDD spectrum. To acquire the downlink channels, the SDMA research first focused on angle-of-arrival estimation and later on beamforming codebooks. The cellular propagation environments turned out to be far more complicated than such system concepts can easily handle.
  2. The breakthroughs in information theory for multi-user MIMO happened in the early 2000s, so there was no theoretical framework that the industry could use in the nineties to evaluate and optimize their multiple antenna concepts.
  3. In practice, it has been far easier to increase the area capacity by deploying more base stations and using more spectrum than by developing more advanced base station hardware. In current networks, there are typically zero, one, or two active users per cell at a time, and then there is little need for multi-user MIMO.

Why is multi-user MIMO considered a key 5G technology? Basically because the three issues described above have now changed substantially. There is a renewed interest in TDD, with successful cellular deployments in Asia and WiFi being used everywhere. Massive MIMO is the refined form of multi-user MIMO, where the TDD operation enables channel estimation in any propagation environment, the many antennas allow for low-complexity signal processing, and the scalable protocols are suitable for large-scale deployments. The technology can nowadays be implemented using power-efficient off-the-shelf radio-frequency transceivers, as demonstrated by testbeds. Massive MIMO builds upon a solid ground of information theory, which shows how to communicate efficiently under practical impairments such as interference and imperfect channel knowledge.

Maybe most importantly, spatial multiplexing is needed to manage the future data traffic growth. This is because deploying many more base stations or obtaining much more spectrum are not viable options if we want to maintain network coverage: small cells at street level are easily shadowed by buildings, and mm-wave frequency signals do not propagate well through walls. In 5G networks, a typical cellular base station might have tens of active users at a time, which is a sufficient number to benefit from the great spectral efficiency offered by Massive MIMO.

How Much does Massive MIMO Improve the Spectral Efficiency?

It is often claimed in the academic literature that Massive MIMO can greatly improve the spectral efficiency. What does it mean, qualitatively and quantitatively? This is what I will try to explain.

With spectral efficiency, we usually mean the sum spectral efficiency of the transmissions in a cell of a cellular network. It is measured in bit/s/Hz. If you multiply it by the bandwidth, you get the cell throughput measured in bit/s. For example, a spectral efficiency of 5 bit/s/Hz over a 20 MHz bandwidth corresponds to a cell throughput of 100 Mbit/s. Since the bandwidth is a scarce resource, particularly at the frequencies below 5 GHz that are suitable for network coverage, it is highly desirable to improve the cell throughput by increasing the spectral efficiency rather than by increasing the bandwidth.

A great way to improve the spectral efficiency is to simultaneously serve many user terminals in the cell, over the same bandwidth, by means of space division multiple access. This is where Massive MIMO is king. There is no doubt that this technology can improve the spectral efficiency. The question is rather “how much?”

Earlier this year, the joint experimental effort by the universities in Bristol and Lund demonstrated an impressive spectral efficiency of 145.6 bit/s/Hz, over a 20 MHz bandwidth in the 3.5 GHz band. The experiment was carried out in a single-cell indoor environment. This huge spectral efficiency can be compared with 3 bit/s/Hz, which is the IMT-Advanced requirement for 4G. The remarkable Massive MIMO gain was achieved by spatial multiplexing of data signals to 22 users using 256-QAM. Since 256-QAM carries 8 bit per symbol, the raw spectral efficiency is 22 × 8 = 176 bit/s/Hz, of which 17% was lost for practical reasons.

256-QAM is generally not an option in cellular networks, due to the inter-cell interference and unfavorable cell edge conditions. Numerical simulations can, however, predict the practically achievable spectral efficiency. The figure below shows the uplink spectral efficiency for a base station with 200 antennas that serves a varying number of users. Interference from many tiers of neighboring cells is considered. Zero-forcing detection, pilot-based channel estimation, and power control that gives every user 0 dB SNR are assumed. Different curves are shown for different values of τc, which is the number of symbols per channel coherence interval. The curves have several peaks, since the results are optimized over different pilot reuse factors.

Uplink spectral efficiency in a cellular network with 200 base station antennas.

From this simulation figure, we observe that the spectral efficiency grows linearly with the number of users for the first 30-40 users. For larger numbers of users, the spectral efficiency saturates due to interference and the limited channel coherence. The top value of each curve is in the range of 60 to 110 bit/s/Hz, which is a remarkable improvement over the 3 bit/s/Hz of IMT-Advanced.
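The exact simulation behind the figure accounts for inter-cell interference, pilot contamination, and channel estimation errors, but the linear-then-saturating shape can be reproduced with a much simpler model: with zero-forcing and perfect CSI, each of the $K$ users gets roughly $\log_2(1+(M-K)\,\mathrm{SNR})$ bit/s/Hz, while spending $K$ of the $\tau_c$ symbols on pilots leaves the fraction $(1-K/\tau_c)$ for data. Below is a minimal Python sketch under these simplifying assumptions; the 200 antennas and 0 dB SNR are from the setup above, the $\tau_c$ value is an assumed example, and since interference is ignored the absolute values are optimistic compared to the figure:

```python
import numpy as np

M = 200       # base station antennas (as in the figure)
snr = 1.0     # 0 dB SNR per user after power control
tau_c = 200   # symbols per coherence interval (assumed example value)

def sum_se(K):
    """Simplified uplink sum SE [bit/s/Hz]: zero-forcing with perfect CSI
    and K orthogonal pilot symbols out of tau_c. Inter-cell interference
    and estimation errors are ignored."""
    prelog = 1 - K / tau_c                          # fraction left for data
    return prelog * K * np.log2(1 + (M - K) * snr)  # ZF array gain of M - K

for K in [10, 30, 60, 90, 120, 150]:
    print(f"K = {K:3d} users: {sum_se(K):5.1f} bit/s/Hz")
# The sum SE first grows almost linearly in K, then saturates and falls
# as the pilot overhead eats up the coherence interval.
```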

In conclusion, 20x-40x improvements in spectral efficiency over IMT-Advanced are what to expect from Massive MIMO.