Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works: it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority is, unsurprisingly, a lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this scalability problem concerns the power control, the signal processing, and the organization of the backhaul.
At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution therefore requires a scalable power control policy. Unsurprisingly, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) devised a power control policy with similar qualities!
Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…
Check out this video, produced by the IEEE Signal Processing Society’s Signal Processing for Communications and Networking (SPCOM) technical committee. The video explains to the layman what 5G is for, and how massive MIMO comes in…
In the research on Beyond Massive MIMO systems, a number of new terms have been introduced, such as:
Intelligent reflective surfaces.
These are basically the same concept (and there are many variations on these names), which is important to recognize so that the research community can analyze them jointly.
The main concept has its origin in reflectarray antennas, which is a class of directive antennas that behave a bit like parabolic reflectors but can be deployed on a flat surface, such as a wall. More precisely, a reflectarray antenna takes an incoming signal wave and reflects it into a predetermined spatial direction, as illustrated in the following figure:
Instead of relying on the physical shape of the antenna to determine the reflective properties (as is the case for parabolic reflectors), a reflectarray consists of many reflective elements that impose element-specific time delays to their reflected signals. These elements are illustrated by the dots on the surface in Figure 1. In this way, the reflected wave is beamformed and the reflectarray can be viewed as a passive MIMO array. The word passive refers to the fact that the radio signal is not generated in the array but elsewhere. Since a given time delay corresponds to a different phase shift depending on the signal’s frequency, reflectarrays are primarily useful for reflecting narrowband signals in a single direction.
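To make the narrowband limitation concrete, here is a toy numerical sketch (all parameter values are my own illustrative assumptions, not from the post): each element applies a fixed phase shift chosen for a design frequency, so the reflected beam is steered correctly only near that frequency.

```python
import numpy as np

# Toy model of a reflectarray: N elements, half-wavelength spacing at the
# design frequency f0. Each element applies a fixed phase shift chosen so
# that, at f0, the reflections add coherently toward the angle theta0.
# All numbers here are illustrative assumptions.

c = 3e8                  # speed of light [m/s]
f0 = 3e9                 # design frequency [Hz]
N = 64                   # number of reflecting elements
d = c / f0 / 2           # element spacing: half a wavelength at f0
theta0 = np.deg2rad(30)  # desired reflection angle

n = np.arange(N)
tau = n * d * np.sin(theta0) / c   # propagation delays to compensate
phi = 2 * np.pi * f0 * tau         # element phase shifts, matched to f0 only

def gain(theta, f):
    """Normalized power gain of the reflected wave toward theta at frequency f."""
    # The fixed phase phi matches the array geometry only at the design frequency
    total_phase = 2 * np.pi * f * n * d * np.sin(theta) / c - phi
    return np.abs(np.exp(1j * total_phase).sum())**2 / N**2

print(round(gain(theta0, f0), 3))    # full coherent gain at the design frequency
print(gain(theta0, 1.1 * f0) < 0.1)  # 10% off f0, the beam has squinted away
```

The same phase profile that gives full coherent gain at the design frequency loses most of its gain when the frequency shifts by 10%, which is exactly why a reflectarray is primarily useful for narrowband signals.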
Reconfigurable reflectarrays can change the time delays of each element to steer the reflected beam in different directions at different points in time. The research on this topic has been going on for decades; the book “Reflectarray Antennas: Analysis, Design, Fabrication, and Measurement” from 2014 by Shaker et al. describes many different implementation concepts and experiments.
Recently, there has been growing interest in reconfigurable reflectarrays from the communication-theory and signal-processing communities. This is demonstrated by a series of new overview papers that focus on applications rather than hardware implementations:
The elements in the reflecting surface in Figure 1 are called meta-atoms or reflective elements in these overview papers. The size of a meta-atom/element is on the order of the wavelength, just as for conventional low-gain antennas. In simple terms, we can view an element as an antenna that captures a radio signal, keeps it inside itself for a short period to create a time delay, and then radiates the signal again. One can thus view it as a relay with a delay-and-forward protocol. Even if the signals are not amplified by a reconfigurable reflectarray, there is a non-negligible energy consumption related to the control protocols and the reconfigurability of the elements.
It is important to distinguish between reflecting surfaces and the concept of large intelligent surfaces with active radio transmitters/receivers, which was proposed for data transmission and positioning by Hu, Rusek, and Edfors. This is basically an active MIMO array with densely deployed antennas.
What are the prospects of the technology?
The recent overview papers describe a number of potential use cases for reconfigurable reflectarrays (metasurfaces) in wireless networks, such as range extension, improved physical layer security, wireless power transfer, and spatial modulation. From a conceptual perspective, it is indeed an exciting prospect to build future networks where not only the transmitter and receiver algorithms can be optimized, but the propagation environment can be partially controlled.
However, the research on this topic is still in its infancy. It is of paramount importance to demonstrate practically important use cases where reconfigurable reflectarrays are fundamentally better than existing methods. For the technology to be economically feasible to turn into a commercial reality, we should not look for use cases where a 10% gain can be achieved but rather a 10x or 100x gain. This is what Marzetta demonstrated with Massive MIMO, and this is also what it can deliver in 5G.
I had an interesting conversation with a respected colleague who expressed some significant reservations about massive MIMO. Let's dissect the arguments.
The first argument against massive MIMO was that most traffic is indoors, that deployment of large arrays indoors is impractical, and that outdoor-to-indoor coverage through massive MIMO is undesirable (or technically infeasible). The obvious counterargument is that the main selling point of massive MIMO is not indoor service provision but outdoor macrocell coverage: the ability of TDD/reciprocity-based beamforming to handle high mobility and to efficiently suppress interference, thus providing cell-edge coverage. (The notion of a “cell-edge” user should be interpreted broadly: anyone with a poor nominal signal-to-interference-and-noise ratio before the MIMO processing kicks in.) But nothing prevents massive MIMO from being installed indoors, if capacity requirements are so high that conventional small-cell or WiFi technology cannot handle the load. Antennas could be integrated into walls, ceilings, window panes, furniture, or even pieces of art. New buildings are often constructed from prefabricated blocks, and antennas could be integrated into these already at production. Nothing prevents the integration of thousands of antennas into the natural objects of a large room.
Outdoor-to-indoor coverage doesn’t work? Importantly, current systems already provide outdoor-to-indoor coverage, and there is no reason Massive MIMO would not do the same (on the contrary, adding base station antennas is always beneficial for performance!). Yet the ideal deployment scenario of massive MIMO is probably not outdoor-to-indoor, so this seems like a partly valid point. The arguments against outdoor-to-indoor are that modern energy-saving windows have a coating that removes at least 20 dB from the link budget. In addition, the small angular spread that results when all signals have to pass through windows (except perhaps in wooden buildings) may reduce the rank of the channel so much that little multiplexing to indoor users is possible. This is mostly speculation, and I am not sure whether experiments are available to confirm or refute it.
Let’s move on to the second argument. The theme here is that as systems use larger and larger bandwidths, but cannot increase the radiated power, the maximum transmission distance shrinks (because the SNR is inversely proportional to the bandwidth). Hence, the cells have to become smaller and smaller, and eventually there will be so few users per cell that the aggressive spatial multiplexing on which massive MIMO relies becomes useless, as there is only a single user to multiplex. This argument may be partly valid, at least given the traffic situation in current networks. But we do not know what future requirements will be. New applications may change the demand for traffic entirely: augmented or virtual reality, large-scale communication with drones and robots, or other use cases that we cannot even envision today.
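The bandwidth-versus-range argument can be sanity-checked with back-of-the-envelope numbers. In the sketch below, the transmit power, lumped gains, path-loss exponent, and target SNR are all illustrative assumptions of mine; what matters is the scaling, since thermal noise power grows linearly with bandwidth.

```python
# Back-of-the-envelope check: noise power is kT * B, so at fixed radiated
# power the SNR drops with bandwidth and the maximum range shrinks.
# All numerical values are illustrative assumptions.

kT = 4e-21         # thermal noise power density at room temperature [W/Hz]
P = 1.0            # radiated power [W]
G = 1e-4           # lumped antenna/propagation gains at reference distance (assumed)
alpha = 3.7        # path-loss exponent (assumed)
snr_target = 10.0  # required SNR (linear, i.e., 10 dB)

def max_range(B):
    """Largest distance d [m] such that P*G*d**-alpha / (kT*B) >= snr_target."""
    return (P * G / (kT * B * snr_target)) ** (1 / alpha)

for B in [20e6, 100e6, 400e6]:
    print(f"{B/1e6:5.0f} MHz -> max range {max_range(B):7.1f} m")
```

With these assumptions, a 20x bandwidth expansion shrinks the range by a factor 20^(1/alpha), roughly 2.2x, so the cell area shrinks several-fold: exactly the effect the argument relies on.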
It is also not obvious that larger bandwidths are really required once massive MIMO is used. Spatial multiplexing to 20 terminals improves the bandwidth efficiency 20 times compared to conventional technology, so instead of 20 times more bandwidth, one may use 20 times more antennas. Significant multiplexing gains are not only proven theoretically but have been demonstrated in commercial field trials. It is sometimes argued that traffic is so bursty that these multiplexing gains cannot materialize in practice, but this is partly a misconception and partly a consequence of flawed higher-layer designs (most importantly TCP/IP) and vested interests in these designs. For example, for the services that constitute most of the raw bits, especially video streaming, there is no good reason to use TCP/IP at all. Hopefully, once the enormous potential of the massive MIMO physical layer becomes more widely known and understood, market forces will push a redesign of higher-layer and application protocols so that they can fully benefit from it. Does this entail a complete redesign of the Internet? Probably not, but buffers have to be installed and parts of the link layer should be revamped to make maximal use of the “wires in the air”, ideally suited for aggressive multiplexing of circuit-switched data, that massive MIMO offers.
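The bandwidth-versus-antennas trade-off above is just Shannon arithmetic. The sketch below makes it explicit under the idealized assumption of perfect, interference-free spatial separation of the users; the SNR and bandwidth values are illustrative assumptions.

```python
import numpy as np

# Sum-rate comparison: serving K users in parallel on the same band
# multiplies the rate by K, just like a K-fold bandwidth expansion would.
# Idealized: every user enjoys the same SNR with no residual interference.

B = 20e6     # baseline bandwidth [Hz] (assumed)
snr = 100.0  # 20 dB per-user SNR (assumed)
K = 20       # users spatially multiplexed by the array

single_user = B * np.log2(1 + snr)          # conventional single-user link
more_bandwidth = 20 * B * np.log2(1 + snr)  # 20x wider band, one user
more_antennas = B * K * np.log2(1 + snr)    # same band, 20 users in parallel

print(f"baseline:         {single_user / 1e6:7.1f} Mbit/s")
print(f"20x bandwidth:    {more_bandwidth / 1e6:7.1f} Mbit/s")
print(f"20x multiplexing: {more_antennas / 1e6:7.1f} Mbit/s")
```

Under these idealizations the two expansion routes give identical sum rates, which is the sense in which "20 times more antennas" can substitute for "20 times more bandwidth".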
In the May issue of the IEEE Signal Processing Magazine, you can read the most personal article that I have written so far. It is entitled “Reproducible Research: Best Practices and Potential Misuse” and is available on IEEE Xplore and arXiv.org. In this article, I share my experiences of making simulation code openly available.
I started doing that in 2012, the year after I received my Ph.D. degree. It was very uncommon to make code publicly available in the MIMO field at that time, but I think we are definitely moving in the right direction. For example, the hype around machine learning has encouraged people to create open datasets and to share Python code. The Machine Learning for Communications Emerging Technologies Initiative by IEEE ComSoc has recently created a website with simulation code, which contains dozens of contributions from many different authors. A few of them are related to Massive MIMO!
One important side-effect of making my code available is that I force myself to write the code as cleanly as possible. This is incredibly useful if you are going to reuse parts of the code in future publications. For example, when I wrote the paper “Making Cell-Free Massive MIMO Competitive With MMSE Processing and Centralized Implementation” earlier this year, I could reuse a lot of the code from my book Massive MIMO Networks. I was amazed by how little time it actually took to generate the simulations for that paper. The simulation setup is entirely different, but I could anyway reuse many of the signal processing and optimization algorithms that I had implemented earlier.
There has been a lot of fuss about hybrid analog-digital beamforming in the development of 5G. Strangely, it is not because of this technology’s merits but rather due to a general disbelief in the telecom industry’s ability to build fully digital transceivers in frequency bands above 6 GHz. I find this rather odd; we live in a society that is becoming increasingly digitalized, with everything changing from analog to digital. Why would wireless technology suddenly move in the opposite direction?
When Marzetta published his seminal Massive MIMO paper in 2010, the idea of having an array with a hundred or more fully digital antennas was considered science fiction, or at least prohibitively costly and power consuming. Today, we know that Massive MIMO is actually a pre-5G technology, with 64-antenna systems already deployed in LTE systems operating below 6 GHz. These antenna panels are commercially competitive; 95% of the base stations that Huawei is currently selling have at least 32 antennas. The fast technological development demonstrates that the initial skepticism against Massive MIMO was based on misconceptions rather than fundamental facts.
In the same way, there is nothing fundamental that prevents the development of fully digital transceivers in mmWave bands; it is only a matter of time before such transceivers are developed and come to dominate the market. With digital beamforming, we can get rid of the complicated beam-searching and beam-tracking algorithms that have been developed over the past five years and achieve a simpler and more reliable system operation, particularly using TDD operation and reciprocity-based beamforming.
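As a rough illustration of why digital beamforming removes the need for beam search, here is a minimal sketch of TDD reciprocity-based beamforming (the array size, noise level, and i.i.d. channel model are simplifying assumptions of mine): a single uplink pilot yields a channel estimate that is reused directly as the downlink precoder, with no codebook sweep.

```python
import numpy as np

# Minimal sketch of TDD reciprocity-based beamforming with a fully
# digital array: estimate the channel from one uplink pilot, then use
# the (reciprocal) estimate as a maximum-ratio downlink precoder.
# No beam-searching over a codebook is needed. Parameters are illustrative.

rng = np.random.default_rng(1)
M = 64  # number of base station antennas

# True channel: i.i.d. Rayleigh fading, unit average gain per antenna
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

# Uplink: the user sends one known pilot; the array observes h plus noise
noise = 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))
h_hat = h + noise  # least-squares channel estimate

# Downlink: maximum-ratio precoder built from the uplink estimate
w = np.conj(h_hat) / np.linalg.norm(h_hat)

# Beamforming gain at the user, relative to a single-antenna transmission
gain = np.abs(h @ w) ** 2
print(f"array gain: {gain:.1f} (up to {M} with perfect CSI)")
```

The key point is that the whole procedure is one pilot plus one closed-form computation per coherence block, which scales gracefully with mobility, unlike iterative beam search and tracking.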
I didn’t jump onto the hybrid beamforming research train since it already had many passengers and I thought that the topic would become irrelevant within 5-10 years. But I was wrong; it now seems that digital solutions will be released much earlier than I expected. At the 2018 European Microwave Conference, NEC Corporation presented an experimental verification of an active antenna system (AAS) for the 28 GHz band with 24 fully digital transceiver chains. The design is modular and consists of 24 horizontally stacked antennas, which means that the same design could be used for even larger arrays.
Tomoya Kaneko, Chief Advanced Technologist for RF Technologies Development at NEC, told me that they aim to release a fully digital AAS in just a few years. So maybe hybrid analog-digital beamforming will be replaced by digital beamforming already at the beginning of the 5G mmWave deployments?
That said, I think that hybrid beamforming algorithms will have new roles to play in the future. The first generations of a new communication system might reach the market faster by using a hybrid analog-digital architecture, which requires hybrid beamforming, than by waiting for the fully digital implementation to be finalized. This could, for example, be the case for holographic beamforming or MIMO systems operating in the sub-THz bands. There will also continue to be non-mobile point-to-point communication systems with line-of-sight channels (e.g., satellite communications) where analog solutions are sufficient to achieve all the performance gains that MIMO can provide.
This is why the sub-6 GHz bands will continue to be the backbone of the future 5G networks, just as in previous cellular generations, while mmWave bands will define the best-case performance. A clear example of this is the 5G deployment strategy of the US operator Sprint, which I heard about in a keynote by John Saw, CTO at Sprint, at the Brooklyn 5G Summit. (Here is a video of the keynote.)
Sprint will use spectrum in the 600 MHz band to achieve wide-spread 5G coverage. This low frequency will enable spatial multiplexing of many users if Massive MIMO is used, but the data rates per user will be rather limited since only a few tens of MHz of bandwidth is available. Nevertheless, this band will define the guaranteed service level of the 5G network.
In addition, Sprint has 120 MHz of TDD spectrum in the 2.5 GHz band and is deploying 64-antenna Massive MIMO base stations in many major cities; there will be more than 1000 sites in 2019. This band can be used both for spatial multiplexing of many users and for improving the per-user data rates thanks to the beamforming gains. John Saw pointed out that the word “massive” in Massive MIMO sounds scary, but the actual arrays are neat and compact in the 2.5 GHz band. He also explained that this frequency band supports high mobility, which is very challenging at mmWave frequencies. The mobility support is demonstrated in the following video:
Tom Marzetta, the originator of Massive MIMO, attended the keynote and gave me the following comment: “It is gratifying to hear the CTO of Sprint confirm, through actual commercial deployments, what the advocates of Massive MIMO have said for so long.”
Interestingly, Sprint noticed that their customers immediately used more data when Massive MIMO was turned on, and there were more simultaneous users in the network. This demonstrates the fact that whenever you create a more capable cellular network, the users will be encouraged to use more data and new use cases will gradually appear. This is why we need to continue looking for ways to improve the spectral efficiency beyond 5G and Massive MIMO.