Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works: it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority is, unsurprisingly, a lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this scalability problem concerns the power control, the signal processing, and the organization of the backhaul.
At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution requires a scalable power control policy. Unsurprisingly, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) also devised a power control policy with similar qualities!
Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…
Check out this video, produced by the IEEE Signal Processing Society’s Signal Processing for Communications and Networking (SPCOM) technical committee. The video explains to the layman what 5G is for, and how massive MIMO comes in…
In the research on Beyond Massive MIMO systems, a number of new terms have been introduced, such as:
Reconfigurable reflectarrays;
Software-controlled metasurfaces;
Intelligent reflective surfaces.
These are basically the same thing (and there are many variations on the three names), which is important to recognize so that the research community can analyze them in a joint manner.
Background
The main concept has its origin in reflectarray antennas, which is a class of directive antennas that behave a bit like parabolic reflectors but can be deployed on a flat surface, such as a wall. More precisely, a reflectarray antenna takes an incoming signal wave and reflects it into a predetermined spatial direction, as illustrated in the following figure:
Instead of relying on the physical shape of the antenna to determine the reflective properties (as is the case for parabolic reflectors), a reflectarray consists of many reflective elements that impose element-specific time delays on their reflected signals. These elements are illustrated by the dots on the surface in Figure 1. In this way, the reflected wave is beamformed and the reflectarray can be viewed as a passive MIMO array. The word passive refers to the fact that the radio signal is not generated in the array but elsewhere. Since a given time delay corresponds to a different phase shift depending on the signal’s frequency, reflectarrays are primarily useful for reflecting narrowband signals in a single direction.
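To make the narrowband limitation concrete, here is a minimal sketch (with hypothetical parameters: 64 elements, half-wavelength spacing, a 3 GHz design frequency) of how element-specific phase shifts steer a reflected wave, and why the equivalent fixed time delays give different phases at another frequency:

```python
import numpy as np

c = 3e8                      # speed of light [m/s]
f = 3e9                      # design (carrier) frequency [Hz]
lam = c / f                  # wavelength [m]
N = 64                       # number of reflective elements
d = lam / 2                  # element spacing [m]

theta_in = np.deg2rad(30)    # angle of the incoming wave
theta_out = np.deg2rad(-45)  # desired reflection direction

n = np.arange(N)
# Per-element phase shift so the reflected wavefronts add up coherently in
# theta_out (one common sign convention; definitions vary in the literature)
phase = -2 * np.pi * (d / lam) * n * (np.sin(theta_in) + np.sin(theta_out))

# Equivalent time delay per element, valid at the design frequency f
tau = np.mod(phase, 2 * np.pi) / (2 * np.pi * f)

# At another frequency f2, the same delays produce different phase shifts,
# so the beam squints away from theta_out: the narrowband limitation
f2 = 3.3e9
phase_at_f2 = 2 * np.pi * f2 * tau
print(np.allclose(np.mod(phase, 2 * np.pi), np.mod(phase_at_f2, 2 * np.pi)))
```

The last line prints False: a delay tuned for 3 GHz no longer realizes the intended phase profile at 3.3 GHz, which is why a reflectarray reflects a narrowband signal well but a wideband signal poorly.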
Reconfigurability
Reconfigurable reflectarrays can change the time delays of each element to steer the reflected beam in different directions at different points in time. The research on this topic has been going on for decades; the book “Reflectarray Antennas: Analysis, Design, Fabrication, and Measurement” from 2014 by Shaker et al. describes many different implementation concepts and experiments.
Recently, there has been growing interest in reconfigurable reflectarrays from the communication-theoretic and signal processing communities. This is demonstrated by a series of new overview papers that focus on applications rather than hardware implementations:
The elements in the reflecting surface in Figure 1 are called meta-atoms or reflective elements in these overview papers. The size of a meta-atom/element is smaller than the wavelength, just as for conventional low-gain antennas. In simple words, we can view an element as an antenna that captures a radio signal, retains it for a short period to create a time delay, and then radiates the signal again. One can thus view it as a relay with a delay-and-forward protocol. Even if the signals are not amplified by a reconfigurable reflectarray, there is a non-negligible energy consumption related to the control protocols and the reconfigurability of the elements.
It is important to distinguish between reflecting surfaces and the concept of large intelligent surfaces with active radio transmitters/receivers, which was proposed for data transmission and positioning by Hu, Rusek, and Edfors. This is basically an active MIMO array with densely deployed antennas.
What are the prospects of the technology?
The recent overview papers describe a number of potential use cases for reconfigurable reflectarrays (metasurfaces) in wireless networks, such as range extension, improved physical layer security, wireless power transfer, and spatial modulation. From a conceptual perspective, it is indeed an exciting prospect to build future networks where not only the transmitter and receiver algorithms can be optimized, but the propagation environment can be partially controlled.
However, the research on this topic is still in its infancy. It is of paramount importance to demonstrate practically important use cases where reconfigurable reflectarrays are fundamentally better than existing methods. For the technology to be turned into an economically feasible commercial reality, we should not look for use cases where a 10% gain can be achieved but rather a 10x or 100x gain. This is what Marzetta demonstrated with Massive MIMO, and it is also what Massive MIMO can deliver in 5G.
I haven’t seen any convincing demonstrations of such use cases of reflectarray antennas (metasurfaces) thus far. On the contrary, my new paper “Intelligent Reflecting Surface vs. Decode-and-Forward: How Large Surfaces Are Needed to Beat Relaying?” shows that the new technology can indeed provide range extension, but a basic single-antenna decode-and-forward relay can outperform it unless the surface is very large. There is much left to do on this topic!
I had an interesting conversation with a respected colleague who expressed some significant reservations against massive MIMO. Let’s dissect the arguments.
The first argument against massive MIMO was that most traffic is indoors, that deployment of large arrays indoors is impractical, and that outdoor-to-indoor coverage through massive MIMO is undesirable (or technically infeasible). I think the obvious counterargument is that the main selling point of massive MIMO is not indoor service provision but outdoor macrocell coverage: the ability of TDD/reciprocity-based beamforming to handle high mobility and to efficiently suppress interference, thereby providing cell-edge coverage. (The notion of a “cell-edge” user should be broadly interpreted: anyone having a poor nominal signal-to-interference-and-noise ratio before the MIMO processing kicks in.) But nothing prevents massive MIMO from being installed indoors, if capacity requirements are so high that conventional small-cell or WiFi technology cannot handle the load. Antennas could be integrated into walls, ceilings, window panes, furniture, or even pieces of art. New buildings are often constructed from prefabricated building blocks, and antennas could be integrated into these already at the production stage. Nothing prevents the integration of thousands of antennas into natural objects in a large room.
Outdoor-to-indoor coverage doesn’t work? Importantly, current systems already provide outdoor-to-indoor coverage, and there is no reason Massive MIMO would not do the same (on the contrary, adding base station antennas is always beneficial for performance!). Yet the ideal deployment scenario for massive MIMO is probably not outdoor-to-indoor, so this seems like a partly valid point. The arguments against outdoor-to-indoor are that modern energy-saving windows have a coating that removes at least 20 dB from the link budget. In addition, the small angular spread that results when all signals must pass through windows (except perhaps in wooden buildings) may reduce the rank of the channel so much that not much multiplexing to indoor users is possible. This is mostly speculation, however, and I am not sure whether experiments are available to confirm or refute it.
Let’s move on to the second argument. Here the theme is that as systems use larger and larger bandwidths, but cannot increase radiated power, the maximum transmission distance shrinks (because the SNR is inversely proportional to the bandwidth). Hence, the cells have to get smaller and smaller, and eventually there will be so few users per cell that the aggressive spatial multiplexing on which massive MIMO relies becomes useless, as there is only a single user to multiplex. This argument may be partly valid, at least given the traffic situation in current networks. But we do not know what future requirements will be. New applications may change the demand for traffic entirely: augmented or virtual reality, large-scale communication with drones and robots, or other use cases that we cannot even envision today.
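The bandwidth-versus-range trade-off can be sketched with a back-of-the-envelope link budget (all numbers below are illustrative, and the pathloss model omits constant antenna and propagation gains): the noise power grows linearly with the bandwidth B, so at fixed radiated power the SNR scales as P / (N0 · B · d^alpha).

```python
def max_range(P, N0, B, snr_target, alpha):
    """Largest distance d such that P / (N0 * B * d**alpha) >= snr_target."""
    return (P / (N0 * B * snr_target)) ** (1.0 / alpha)

P = 1.0          # radiated power [W] (illustrative)
N0 = 4e-21       # noise power spectral density [W/Hz] (about -174 dBm/Hz)
snr_target = 10  # required SNR (10 dB)
alpha = 3.7      # pathloss exponent (a typical urban macro value)

d_20MHz = max_range(P, N0, 20e6, snr_target, alpha)
d_200MHz = max_range(P, N0, 200e6, snr_target, alpha)

# 10x the bandwidth shrinks the range by the factor 10**(1/alpha), about 1.86
print(d_20MHz / d_200MHz)
```

Note that the range shrinks by 10^(1/alpha), not by 10, so the effect is softened by the pathloss exponent; still, the qualitative point of the argument holds.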
It is also not obvious that larger bandwidths are really required once massive MIMO is in place. Spatial multiplexing to 20 terminals improves the bandwidth efficiency 20 times compared to conventional technology, so instead of 20 times more bandwidth, one may use 20 times more antennas. Significant multiplexing gains are not only proven theoretically but have been demonstrated in commercial field trials. It is sometimes argued that traffic is so bursty that these multiplexing gains cannot materialize in practice, but this is partly a misconception and partly a consequence of deficient higher-layer designs (most importantly TCP/IP) and vested interests in these flawed designs. For example, for the services that constitute most of the raw bits, especially video streaming, there is no good reason to use TCP/IP at all. Hopefully, once the enormous potential of massive MIMO physical-layer technology becomes more widely known and understood, market forces will push a re-design of higher-layer and application protocols so that they can maximally benefit from the massive MIMO physical layer. Does this entail a complete re-design of the Internet? Probably not, but buffers have to be installed and parts of the link layer revamped to maximally use the “wires in the air” that massive MIMO offers, which are ideally suited for aggressive multiplexing of circuit-switched data.
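The antennas-for-bandwidth trade can be verified with simple arithmetic (the numbers are hypothetical, and the simplifying assumption is that every multiplexed stream enjoys the same post-processing SNR, which real interference suppression only approximates):

```python
import math

def sum_rate(K, B, snr):
    """Sum rate [bit/s] of K spatially multiplexed streams over bandwidth B,
    assuming each stream gets the same post-processing SNR."""
    return K * B * math.log2(1 + snr)

B = 20e6     # 20 MHz of spectrum
snr = 100    # 20 dB post-processing SNR per stream

single_user_wideband = sum_rate(1, 20 * B, snr)   # 20x bandwidth, one stream
multiplexed = sum_rate(20, B, snr)                # 20 streams, original bandwidth
print(single_user_wideband == multiplexed)        # True: identical sum rate
```

In other words, under these idealized assumptions the sum rate is the same whether the factor of 20 comes from spectrum or from spatial streams, and only the latter avoids the range penalty of wider bandwidth.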
In the May issue of the IEEE Signal Processing Magazine, you can read the most personal article that I have written so far. It is entitled “Reproducible Research: Best Practices and Potential Misuse” and is available on IEEEXplore and ArXiv.org. In this article, I share my experiences of making simulation code openly available.
I started doing that in 2012, the year after I received my Ph.D. degree. It was very uncommon to make code publicly available in the MIMO field at that time, but I think we are definitely moving in the right direction. For example, the hype around machine learning has encouraged people to create open datasets and to share Python code. The Machine Learning for Communications Emerging Technologies Initiative by IEEE ComSoc has recently created a website with simulation code, which contains dozens of contributions from many different authors. A few of them are related to Massive MIMO!
One important side-effect of making my code available is that I force myself to write the code as cleanly as possible. This is incredibly useful if you are going to reuse parts of the code in future publications. For example, when I wrote the paper “Making Cell-Free Massive MIMO Competitive With MMSE Processing and Centralized Implementation” earlier this year, I could reuse a lot of the code from my book Massive MIMO Networks. I was amazed by how little time it actually took to generate the simulations for that paper. The simulation setup is entirely different, but I could anyway reuse many of the signal processing and optimization algorithms that I had implemented earlier.
There has been a lot of fuss about hybrid analog-digital beamforming in the development of 5G. Strangely, it is not because of this technology’s merits but rather due to general disbelief in the telecom industry’s ability to build fully digital transceivers in frequency bands above 6 GHz. I find this rather odd; we are living in a society that is becoming increasingly digitalized, with everything changing from analog to digital. Why would wireless technology suddenly move in the opposite direction?
When Marzetta published his seminal Massive MIMO paper in 2010, the idea of having an array with a hundred or more fully digital antennas was considered science fiction, or at least prohibitively costly and power consuming. Today, we know that Massive MIMO is actually a pre-5G technology, with 64-antenna systems already deployed in LTE systems operating below 6 GHz. These antenna panels are commercially very competitive; 95% of the base stations that Huawei is currently selling have at least 32 antennas. The fast technological development demonstrates that the initial skepticism against Massive MIMO was based on misconceptions rather than fundamental facts.
In the same way, there is nothing fundamental that prevents the development of fully digital transceivers in mmWave bands; it is only a matter of time before such transceivers are developed and come to dominate the market. With digital beamforming, we can get rid of the complicated beam-searching and beam-tracking algorithms that have been developed over the past five years and achieve a simpler and more reliable system operation, particularly when using TDD operation and reciprocity-based beamforming.
I didn’t jump onto the hybrid beamforming research train since it already had many passengers and I thought that this research topic would become irrelevant after 5-10 years. But I was wrong – it now seems that the digital solutions will be released much earlier than I thought. At the 2018 European Microwave Conference, NEC Corporation presented an experimental verification of an active antenna system (AAS) for the 28 GHz band with 24 fully digital transceiver chains. The design is modular and consists of 24 horizontally stacked antennas, which means that the same design could be used for even larger arrays.
Tomoya Kaneko, Chief Advanced Technologist for RF Technologies Development at NEC, told me that they target to release a fully digital AAS in just a few years. So maybe hybrid analog-digital beamforming will be replaced by digital beamforming already in the beginning of the 5G mmWave deployments?
That said, I think hybrid beamforming algorithms will have new roles to play in the future. The first generations of new communication systems might reach the market faster by using a hybrid analog-digital architecture, which requires hybrid beamforming, than by waiting for a fully digital implementation to be finalized. This could, for example, be the case for holographic beamforming or MIMO systems operating in the sub-THz bands. There will also continue to exist non-mobile point-to-point communication systems with line-of-sight channels (e.g., satellite communications) where analog solutions are quite sufficient to achieve all the performance gains that MIMO can provide.
Channel fading has always been a limiting factor in wireless communications, which is why various diversity schemes have been developed to combat fading (and other channel impairments). The basic idea is to obtain many “independent” observations of the channel and exploit the fact that it is unlikely that all of these observations are in a deep fade simultaneously. These observations can be obtained over time, frequency, space, polarization, etc.
Antenna selection is the basic form of space diversity. Suppose a base station (BS) equipped with multiple antennas applies antenna selection. In the uplink, the BS only uses the antenna that currently gives the highest signal-to-interference-and-noise ratio (SINR). In the downlink, the BS only transmits from the antenna that currently has the highest SINR. As the user moves around, the fading changes and we, therefore, need to reselect which antenna to use.
The term antenna selection diversity can be traced back to the 1980s, but this diversity scheme was analyzed already in the 1950s. One well-cited paper from that time is Linear Diversity Combining Techniques by D. G. Brennan. This paper demonstrates mathematically and numerically that selection diversity is suboptimal, while the scheme called maximum-ratio combining (MRC) always provides higher SINR. Hence, instead of only selecting one antenna, it is preferable for the BS to coherently combine the signals from/to all the antennas to maximize the SINR. When the MRC scheme is applied in Massive MIMO with a very large number of antennas, we often talk about channel hardening but this is nothing but an extreme form of space diversity that almost entirely removes the fading effect.
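Brennan’s conclusion is easy to reproduce numerically. The following is a minimal Monte Carlo sketch (with hypothetical parameters: 8 antennas, i.i.d. Rayleigh fading, no interference) comparing the effective SNR of antenna selection against MRC:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                  # number of BS antennas
trials = 100_000       # number of fading realizations

# i.i.d. CN(0,1) channel coefficients; per-antenna SNR scales with |h|^2
h = (rng.standard_normal((trials, M))
     + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
gain = np.abs(h) ** 2

snr_selection = gain.max(axis=1)   # use only the currently best antenna
snr_mrc = gain.sum(axis=1)         # coherently combine all antennas

print(np.all(snr_mrc >= snr_selection))        # True: MRC never loses
print(snr_mrc.mean() / snr_selection.mean())   # average SNR advantage of MRC
```

The first printout is always True, simply because a sum of non-negative gains can never be smaller than its largest term; the second shows that the average advantage of MRC grows with the number of antennas, which is exactly why selection diversity is suboptimal.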
Even if the suboptimality of selection diversity has been known for 60 years, the antenna selection concept has continued to be analyzed in the MIMO literature and recently also in the context of Massive MIMO. Many recent papers are considering a generalization of the scheme that is known as antenna subset selection, where a subset of the antennas is selected and then MRC is applied using only these ones.
Why use antenna selection?
A common motivation for using antenna selection is that it would be too expensive to equip every antenna with a dedicated transceiver chain in Massive MIMO, and that we therefore need to sacrifice some performance to achieve a feasible implementation. This is a misleading motivation since Massive MIMO capable base stations have already been developed and commercially deployed. I think a better motivation would be that we can save power by only using a subset of the antennas at a time, particularly when the traffic load is below the maximum system capacity, so that we don’t need to compromise the users’ throughput.
The recent papers [1], [2], [3] on the topic consider narrowband MIMO channels. In contrast, Massive MIMO will in practice be used in wideband systems where the channel fading differs between subcarriers. That means that one antenna will give the highest SINR on one subcarrier, while another antenna will give the highest SINR on another subcarrier. If we apply the antenna selection principle on a per-subcarrier basis in a wideband OFDM system with thousands of subcarriers, we will probably use all the antennas on at least one subcarrier each. Consequently, we cannot turn off any of the antennas and the power-saving benefits are lost.
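This effect is easy to see in simulation. Below is a sketch with a synthetic frequency-selective channel (the parameters, 16 antennas, 64 i.i.d. Rayleigh taps, 1024 OFDM subcarriers, are hypothetical) that counts how many distinct antennas end up being “best” on at least one subcarrier:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16            # number of BS antennas
K = 1024          # OFDM subcarriers
L = 64            # channel taps (degree of frequency selectivity)

# i.i.d. Rayleigh taps per antenna; the FFT of the impulse response gives
# the per-subcarrier frequency response
taps = (rng.standard_normal((M, L))
        + 1j * rng.standard_normal((M, L))) / np.sqrt(2 * L)
H = np.fft.fft(taps, n=K, axis=1)          # frequency response, shape (M, K)

best_antenna = np.abs(H).argmax(axis=0)    # selected antenna per subcarrier
used = np.unique(best_antenna)
print(len(used), "of", M, "antennas are selected on some subcarrier")
```

With enough frequency selectivity, essentially every antenna is the best choice somewhere in the band, so per-subcarrier selection leaves no antenna that could be switched off.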
We can instead apply the antenna selection scheme based on the average received power over all the subcarriers, but most channel models assume that this average power is the same for every base station antenna (this applies to both i.i.d. fading and correlated fading models, such as the one-ring model). That means that if we want to turn off antennas, we can select them randomly since all random selections will be (almost) equally good, and there are no selection diversity gains to be harvested.
This is why we can forget about antenna selection diversity in Massive MIMO!
It is only when the average channel gain differs among the antennas that antenna subset selection diversity might have a role to play. In that case, the antenna selection is governed by variations in the large-scale fading instead of variations in the small-scale fading, as conventionally assumed. This paper takes a step in that direction. I think this is the only case of antenna (subset) selection that might deserve further attention; in general, it is a concept that can be forgotten.