Category Archives: 5G

Beyond the Cellular Paradigm: Cell-Free Architectures with Radio Stripes

I just finished giving an IEEE Future Networks Webinar on the topic of Cell-free Massive MIMO and radio stripes. The webinar is more technical than my previous popular-science video on the topic, but it can still be viewed as an overview of the basics of the technology and of how it can be implemented using radio stripes.

If you missed the chance to view the webinar live, you can access the recording and slides afterwards by following this link. The recording contains 42 minutes of presentation and 18 minutes of Q/A session. If your question was not answered during the session, please feel free to ask it here on the blog instead.

Two Roles of Deep Learning in Massive MIMO

The hype around machine learning, particularly deep learning, has spread over the world. It is not only affecting engineers but also philosophers and government agencies, which try to predict what implications machine learning will have on our society.

Once the initial excitement has subsided, I think machine learning will be an important tool that many engineers will find useful, alongside more classical tools such as optimization theory and Fourier analysis. I have spent the last two years thinking about what role deep learning can have in the field of communications. This field is rather different from other areas where deep learning has been successful: we deal with man-made systems that have been designed, based on rigorous theory, to operate close to the fundamental performance limits, for example, the Shannon capacity. Hence, at first sight, there seems to be little room for improvement.

I have nevertheless identified two main applications of supervised deep learning in the physical layer of communication systems: 1) algorithmic approximation and 2) function inversion.

You can read about them in my recent survey paper “Two Applications of Deep Learning in the Physical Layer of Communication Systems” or watch the following video:

In the video, I’m exemplifying the applications through two recent papers where we applied deep learning to improve Massive MIMO systems. Here are links to those papers:

Trinh Van Chien, Emil Björnson, Erik G. Larsson, “Sum Spectral Efficiency Maximization in Massive MIMO Systems: Benefits from Deep Learning,” IEEE International Conference on Communications (ICC), 2019.

Özlem Tugfe Demir, Emil Björnson, “Channel Estimation in Massive MIMO under Hardware Non-Linearities: Bayesian Methods versus Deep Learning,” IEEE Open Journal of the Communications Society, 2020.

Revitalizing the Research on Wireless Communications in a New Decade

In the last decade, the research on wireless communications has been strongly focused on the development of 5G. Plenty of papers have started with sentences of the kind: “We consider X, which is a promising method that can greatly improve Y in 5G.” For example, X might be Massive MIMO and Y might be the spectral efficiency. We now know which physical-layer methods made it into the first release of the 5G standard, and which did not. It remains to be seen which methods will actually be used in practice and how large performance improvements 5G can deliver.

There is no doubt that the 5G research has been successful. However, what remains is to improve the developed methods to bridge the gap between the simplifying models and assumptions considered in academia and the practical conditions faced by the industry. Although new scientific papers appear on arXiv.org almost every day, few of them focus on these practically important aspects of the 5G development. Instead, minor variations on well-studied problems dominate and the models are the same simplified ones as ten years ago. We seem to be stuck in doing the same things that led to important advances at the beginning of the 2010s, although we have already solved most problems that can be solved using such simple models. This is why I think we need to revitalize the research!

Two concrete examples

The following two examples explain what I mean.

Example 1: Why would we need more papers on Massive MIMO with uncorrelated Rayleigh fading channels and maximum ratio (MR) processing? We already know that practical channels are spatially correlated and other processing methods are vastly superior to MR while having practically affordable complexity.

Example 2: Why would we need more papers on hybrid beamforming design for flat-fading channels? We already know that the hybrid architecture is only meaningful in wideband mmWave communications, in which case the channels feature frequency-selective fading. The generalization is non-trivial since it is mainly under frequency-selective conditions that the weaknesses/challenges of the hybrid approach appear.

I think that the above-mentioned simplifications were well motivated in the early 2010s when many of the seminal papers on Massive MIMO and mmWave communications appeared. It is usually easier to reach ambitious research goals by taking small steps towards them. It is acceptable to make strong simplifications in the first steps, to achieve the analytical tractability needed to develop a basic intuition and understanding. The later steps should, however, gradually move toward more realistic assumptions that also make the analysis less tractable. We must continuously question whether the initial insights also apply under more practical conditions or whether they were artifacts of the initial simplifications.

Unfortunately, this happened far too seldom in the last decade. Our research community tends to prioritize analytical tractability over realistic models. If a model has been used in prior work, it can often be reused in new papers without being questioned by the reviewers. When I review a paper and question the system model, the authors usually respond with a list of previous papers that use the same model, rather than the theoretical motivation that I would like to see.

It seems to be far easier to publish papers with simple models that enable derivation of analytical “closed-form” expressions and development of “optimal” algorithms, than to tackle more realistic but challenging models where these things cannot be established. The two examples above are symptoms of this problem. We cannot continue in this way if we want to keep the research relevant in this new decade. Massive MIMO and mmWave communications will soon be mainstream technologies!

Entering a new decade

The start of the 2020s is a good time for the research community to start over and think big. Massive MIMO was proposed in a paper from 2010 and initially seemed too good to be true, possibly due to the simplicity of the models used in the early works. In a paper that appeared in 2015, we identified ten “myths” that had flourished when people with a negative attitude against the technology tried to pinpoint why it wouldn’t work in practice. Today – a decade after its inception – Massive MIMO is a key 5G technology and has even become a marketing term used by cellular operators. The US operator Sprint has reported that the first generation of Massive MIMO base stations improves the spectral efficiency by around 10x in its real networks.

I believe that history will repeat itself during this decade. The research into the next big physical layer technology will take off this year – we just don’t know what it will be. There are already plenty of non-technical papers that try to make predictions, so the quota for such papers is already filled. I’ve written one myself entitled “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”. What we need now are visionary technical papers (like the seminal Massive MIMO paper by Marzetta) that demonstrate mathematically how a new technology can achieve ten-fold performance improvements over the state-of-the-art, for example, in terms of spectral efficiency, reliability, latency, or some other relevant metric. Maybe one or two of the research directions listed in my paper will be at the center of 6G. Much research work remains before we can know, thus this is the right time to explore a wide range of new ideas.

Five ways to revitalize the research

To keep the wireless communication research relevant, we should stop considering minor variations on previously solved problems and instead focus either on implementation aspects of 5G or on basic research into entirely new methods that might eventually play a role in 6G. In both cases, I have the following five recommendations for how we can conduct more efficient and relevant research in this new decade.

1. We may start the research on new topics by using simplified models that are analytically tractable, but we must not get stuck in using those models. A beautiful analysis obtained with an unrealistic model might create more confusion than practical insight. Just remember how much of the Massive MIMO research focused on the pilot contamination problem, merely because it happened to be the asymptotically limiting factor when using simplified models, while it is not the limiting factor in general.

2. We must be more respectful towards the underlying physics, particularly electromagnetic theory. We cannot continue normalizing the pathloss variables or guesstimating how they can be computed. When developing a new technology, we must first get the basic models right. Otherwise we risk making fundamental mistakes and – even worse – trick others into repeating those mistakes for years to come. I covered the danger of normalization in a previous blog post.

3. We must not forget about previous methods when evaluating new methods but think carefully about what the true state-of-the-art is. For example, if we want to improve the performance of a cellular network by adding new equipment, we must compare it to existing equipment that could alternatively have been added. For instance, I covered the importance of comparing intelligent reflecting surfaces with relays in a previous blog post.

4. We must make sure new algorithms are reproducible and easily comparable, so that every paper makes useful progress. This can be achieved by publishing simulation code alongside papers and evaluating new algorithms in the same setup as previous algorithms. We might take inspiration from the machine learning field, where ImageNet is a common benchmark.

5. We must not take the correctness of models and results in published papers for granted. This is particularly important nowadays when new IEEE journals with very short review times are gaining traction. Few scientific experts can promise to make a proper review of a full-length paper in just seven days; thus, many reviews will be substandard. This is a step in the wrong direction and can severely reduce the quality and trustworthiness of published papers.

Let us all make an effort to revitalize the research methodology and selection of research problems to solve in the 2020s. If you have further ideas please share them in the comment field!

Is the Pathloss Larger at mmWave Frequencies?

The range of mmWave communication signals is often said to be shorter than for signals in the conventional sub-6 GHz bands. This is usually the case, but the reason for it might not be the one that you think. I will explain what I mean in this blog post.

If one takes a look at the classical free-space pathloss formula, the received power P_r is

(1)   \begin{equation*}P_r = P_t \left( \frac{\lambda}{4\pi d} \right)^2,\end{equation*}

where the transmit power is denoted by P_t, the wavelength is \lambda, and the propagation distance is d. This formula shows that the received power is proportional to the square of the wavelength and, thus, will be smaller when we increase the carrier frequency; that is, the received power is lower at 60 GHz (\lambda=5 mm) than at 3 GHz (\lambda=10 cm). But there is an important catch: the dependence on \lambda is due to the underlying assumption of having a receive antenna with the effective area

(2)   \begin{equation*}A = \frac{\lambda^2}{4\pi}.\end{equation*}

Hence, if we consider a receive antenna with an arbitrary effective area A, we can instead write the received power in (1) as

(3)   \begin{equation*}P_r = P_t  \frac{A}{4\pi d^2},\end{equation*}

which is frequency-independent as long as we keep the antenna area A fixed as we change the carrier frequency. Since the area of a fixed-gain antenna is actually proportional to \lambda^2, as exemplified in (2), in practice we will need to use arrays of multiple antennas in mmWave bands to achieve the same total antenna area A as in lower bands. This is what is normally done in mmWave communications for cellular networks, while a single high-gain antenna with large area can be used for fixed links (e.g., backhaul between base stations or between a satellite and ground station). As explained in Section 7.5 of Massive MIMO Networks, one can actually play with the antenna areas at both the transmitter and receiver to keep the same pathloss in the mmWave bands, while reducing the total antenna area!
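To make the comparison concrete, here is a minimal Python sketch (my own illustration, not from any specific paper) that evaluates the free-space formulas (1)-(3) at 3 GHz and 60 GHz. The function names are hypothetical; the point is that (1) implies a 400x power loss at 60 GHz, while (3) with a fixed total antenna area gives no loss at all.

```python
import math

def pathloss_gain_antenna(wavelength, distance):
    """Free-space factor in (1): P_r / P_t for a fixed-gain receive antenna,
    whose effective area lambda^2/(4*pi) shrinks with the carrier frequency."""
    return (wavelength / (4 * math.pi * distance)) ** 2

def pathloss_fixed_area(area, distance):
    """Free-space factor in (3): P_r / P_t for a receive antenna with a fixed
    effective area A, which is independent of the carrier frequency."""
    return area / (4 * math.pi * distance ** 2)

c, d = 3e8, 100.0  # speed of light [m/s], propagation distance [m]

for f in (3e9, 60e9):
    lam = c / f
    A = lam ** 2 / (4 * math.pi)  # effective area of a fixed-gain antenna, as in (2)
    # (1) and (3) coincide when A = lambda^2/(4*pi):
    print(f / 1e9, "GHz:", pathloss_gain_antenna(lam, d), pathloss_fixed_area(A, d))

# Keeping the 3 GHz antenna area when moving to 60 GHz removes the 400x loss;
# in practice this means replacing one antenna by an array of many antennas:
A_3GHz = (c / 3e9) ** 2 / (4 * math.pi)
print(pathloss_fixed_area(A_3GHz, d))  # same value at any carrier frequency
```

Running the loop shows that the pathloss in (1) drops by a factor (60/3)^2 = 400 when going from 3 GHz to 60 GHz, while the fixed-area formula does not change.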

So why is the signal range shorter in mmWave bands?

The main reasons for the shorter range are:

  • Larger propagation losses in non-line-of-sight scenarios, for example, due to less scattering (fewer propagation paths) and larger penetration losses.
  • The use of more bandwidth, which leads to a lower SNR.

Edit: The atmospheric, molecular, and rain attenuation losses are larger in some mmWave bands, but these are primarily a concern for macro cells with ranges measured in kilometers (see Figures 2-3 in this paper).
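The bandwidth argument in the second bullet is easy to quantify: the thermal noise power is kTB, so the noise floor rises by 10 log10(B2/B1) dB when the bandwidth grows. A small sketch of this effect, assuming room temperature and no receiver noise figure (the bandwidth values are just typical examples):

```python
import math

k_B, T = 1.380649e-23, 290.0  # Boltzmann constant [J/K], reference temperature [K]

def noise_power_dbm(bandwidth_hz):
    """Thermal noise power k*T*B, expressed in dBm (no receiver noise figure)."""
    return 10 * math.log10(k_B * T * bandwidth_hz * 1000)

# Same transmit power over a wider channel: the noise floor rises with the bandwidth
for B in (20e6, 400e6):  # a typical sub-6 GHz carrier vs a typical mmWave carrier
    print(f"{B / 1e6:.0f} MHz: noise floor {noise_power_dbm(B):.1f} dBm")

snr_loss_db = 10 * math.log10(400e6 / 20e6)  # ~13 dB lower SNR from 20x bandwidth
print(f"SNR loss: {snr_loss_db:.1f} dB")
```

This roughly 13 dB loss is one of the things that the beamforming gain of a large antenna array must compensate for in the mmWave bands.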

Channel Sparsity in Massive MIMO

Channel estimation is critical in Massive MIMO. One can use the basic least-squares (LS) channel estimator to learn the multi-antenna channel from pilot signals, but if one has prior information about the channel’s properties, that can be used to improve the estimation quality. For example, if one knows the average channel gain, the linear minimum mean-squared error (LMMSE) estimator can be used, as in most of the literature on Massive MIMO.
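As an illustration of why the prior information helps, here is a minimal Monte Carlo sketch (my own toy example, not the estimator from any specific paper) for a single Rayleigh-fading coefficient with known average gain beta, observed through a unit-power pilot. The LMMSE estimator shrinks the noisy observation toward zero and thereby achieves a lower mean-squared error than LS:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma2, N = 1.0, 0.5, 100_000  # average channel gain, noise power, trials

# Rayleigh-fading channel coefficients and noise (unit-power pilot assumed)
h = np.sqrt(beta / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = h + n  # received pilot signal

h_ls = y                              # LS: uses no prior information
h_lmmse = beta / (beta + sigma2) * y  # LMMSE: exploits the known average gain beta

mse_ls = np.mean(np.abs(h - h_ls) ** 2)        # approaches sigma2
mse_lmmse = np.mean(np.abs(h - h_lmmse) ** 2)  # approaches beta*sigma2/(beta+sigma2)
print(mse_ls, mse_lmmse)
```

The gap between the two estimators grows as the SNR decreases, which is exactly the regime where exploiting prior channel knowledge pays off the most.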

There have been many attempts to exploit further channel properties; in particular, channel sparsity is commonly assumed in the academic literature. I have recently received several questions about this topic, so I will take the opportunity to give a detailed answer. More precisely, this blog post discusses temporal and spatial sparsity.

Temporal sparsity

This means that the channel’s impulse response contains one or several pulses with zeros in between. These pulses could represent different paths in a multipath environment that are characterized by non-overlapping time delays. This does not happen in a rich scattering environment, where many diffuse scatterers have overlapping delays, but it could happen in mmWave bands where there are only a few reflected paths.

If one knows that the channel has temporal sparsity, one can utilize such knowledge in the estimator to determine when the pulses arrive and what properties (e.g., phase and amplitude) each one has. However, several hardware-related conditions need to be satisfied. Firstly, the sampling rate must be sufficiently high so that the pulses can be temporally resolved without being smeared together by aliasing. Secondly, the receiver filter has an impulse response that spreads signals out over time, and this must not remove the sparsity.
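To illustrate the first condition, the following sketch (an idealized model with an ideal band-limited receive filter, not a complete receiver simulation; the delays and rates are hypothetical examples) samples a two-path channel with 30 ns delay separation at two different rates. At 1 GHz the two pulses fall on distinct taps, while at 50 MHz the non-integer delay smears the energy over many taps and the temporal sparsity is lost:

```python
import numpy as np

delays_s = [0.0, 30e-9]  # two multipath components arriving 30 ns apart

tap_counts = {}
for fs in (50e6, 1e9):  # sampling rate of the receiver
    # Ideal band-limited (sinc) receive filtering sampled at t = k/fs: each
    # physical path contributes a sinc pulse centered at its own delay.
    k = np.arange(-10, 50)
    h = sum(np.sinc(k - fs * d) for d in delays_s)
    # Count the taps carrying more than 10% of the strongest tap's amplitude
    tap_counts[fs] = int(np.sum(np.abs(h) > 0.1 * np.abs(h).max()))
    print(f"fs = {fs / 1e6:.0f} MHz: {tap_counts[fs]} significant taps")
```

At 1 GHz both delays land on integer sample instants, so the impulse response has exactly two significant taps; at 50 MHz the 30 ns separation corresponds to 1.5 samples, so the sinc tails spread the energy and an estimator can no longer rely on sparsity.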

Spatial sparsity

This means that the multipath channel between the transmitter and receiver only involves paths in a limited subset of all angular directions. If these directions are known a priori, this can be utilized in the channel estimation to only estimate the properties (e.g., phase and amplitude) in those directions. One way to determine the existence of spatial sparsity is to compute a spatial correlation matrix of the channel and analyze its eigenvalues. Each eigenvalue represents the average squared amplitude in one set of angular directions; thus, spatial sparsity would lead to some of the eigenvalues being zero.

Just as for temporal sparsity, spatial sparsity cannot necessarily be utilized even if it physically exists. The antenna array must be sufficiently large (in terms of aperture and number of antennas) to differentiate between directions with signals and directions without signals. If the angular distance between the channel paths is smaller than the beamwidth of the array, it will smear out the paths over many angles. The following example shows that Massive MIMO is not a guarantee for utilizing spatial sparsity.

The figure below considers a 64-antenna scenario where the received signal contains only three paths, having azimuth angles -20°, +30° and +40° and a common elevation angle of 0°. If the 64 antennas are vertically stacked (denoted 1 x 64), the signal gain seems to be the same from all azimuth directions, so the sparsity cannot be observed at all. If the 64 antennas are horizontally stacked (denoted 64 x 1), the signal gain has distinct peaks at the angles of the three paths, but there are also ripples that could have hidden other paths. A more common 64-antenna configuration is an 8 x 8 planar array, for which only two peaks are visible. The paths at +30° and +40° are lumped together due to the limited resolution of the array.

Figure: The received signal gain that is observed from different azimuth angles, using different array geometries. The true signal only contains three paths, which are coming from the azimuth angles -20°, +30° and +40°.
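The figure can be approximately reproduced with a short beam-scanning computation. The sketch below is my own simplified reconstruction (unit path gains, no noise, half-wavelength spacing assumed): it compares a horizontal 64-element array with an 8-element horizontal row, as in one dimension of the 8 x 8 planar array, and counts the resolvable peaks. The vertically stacked 1 x 64 case is omitted since its steering vector depends only on the elevation angle, so all azimuth angles would give the same gain.

```python
import numpy as np

def ula_response(M, angle_deg, spacing=0.5):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    return np.exp(2j * np.pi * spacing * np.arange(M) * np.sin(np.deg2rad(angle_deg)))

paths_deg = [-20.0, 30.0, 40.0]    # azimuth angles of the three paths
scan = np.linspace(-90, 90, 1801)  # 0.1-degree scanning grid

peak_angles = {}
for M in (64, 8):  # horizontal size: 64 x 1 array vs one row of the 8 x 8 array
    y = sum(ula_response(M, a) for a in paths_deg)  # noise-free received signal
    gain = np.array([np.abs(ula_response(M, a).conj() @ y) / M for a in scan])
    # Local maxima well above the sidelobe level = resolvable path directions
    idx = [i for i in range(1, len(scan) - 1)
           if gain[i] > gain[i - 1] and gain[i] > gain[i + 1]
           and gain[i] > 0.4 * gain.max()]
    peak_angles[M] = [round(scan[i], 1) for i in idx]
    print(M, peak_angles[M])
```

The 64-element row resolves all three paths, while the 8-element row only shows two peaks: the +30° and +40° paths merge into a single lump, just as in the figure.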

In addition to having a sufficiently high spatial resolution, a phase-calibrated array might be needed to make use of sparsity, since random phase differences between the antennas could destroy the structure.

Do we need sparsity?

There is no doubt that temporal and spatial sparsity exist, but not every channel will have them. Moreover, the transceiver hardware will destroy the sparsity unless a series of conditions are satisfied. That is why one should not build a wireless technology that requires channel sparsity, because it might then not function properly for many of the users. Sparsity is rather something to utilize to improve the channel estimation in certain special cases.

TDD-reciprocity-based Massive MIMO, as proposed by Marzetta and further considered in my book Massive MIMO Networks, does not require channel sparsity. However, sparsity can be utilized as an add-on when available. In contrast, there are many FDD-based frameworks that require channel sparsity to function properly.

Reproduce the results: The code that was used to produce the plot can be downloaded from my GitHub.

Massive MIMO Enables Fixed Wireless Access

The largest performance gains from Massive MIMO are achieved when the technology is used for spatial multiplexing of many users. These gains can only be harnessed when there actually are many users that ask for data services simultaneously. I sometimes hear the following negative comments about Massive MIMO:

  1. The data traffic is so bursty that there seldom are more than one or two users that ask for data simultaneously.
  2. When there are multiple users, the uplink SNR is often too poor to get the high quality channel state information that is needed to truly benefit from spatial multiplexing.

These points might indeed be true in current cellular networks, but I believe the situation will change in the future. In particular, the new fixed wireless access services require that the network can simultaneously deliver high-rate services to many customers. The business case for these services relies strongly on Massive MIMO and spatial multiplexing, so that one base station site can guarantee a certain data rate to as many customers as possible (just as fiber and cable connections can). The fixed installation of the customer equipment means that channel state information is much easier to acquire (due to better channel conditions, higher transmit power, and absence of mobility). The following video from Ericsson touches upon some of these aspects:

Scalable Cell-Free Massive MIMO

Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works, it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority, no shock, is the lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this limit concerns the power control, the signal processing, and the organization of the backhaul alike.

At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution requires a scalable power control policy. No surprise, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) also devised a power control policy with similar qualities!

Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…