Category Archives: News

Digital Millimeter Beamforming for 5G Terminals

5G used to be described as synonymous with millimeter-wave communications, but now that 5G networks are being rolled out all around the world, the focus is instead on Massive MIMO in the 3 GHz band. Moreover, millimeter-wave communications used to be synonymous with hybrid beamforming (e.g., using multiple analog phased arrays), which was often described as a necessary compromise between performance and hardware complexity. However, digital implementations are already on the way.

Last year, I wrote about experiments by NEC with a 24-antenna base station that carries out digital beamforming in the 28 GHz band. The same convergence towards digital solutions is happening for the chips that can be used in 5G terminals. The University of Michigan published experimental results at the 2020 IEEE Radio Frequency Integrated Circuits Symposium (RFIC) on a 16-element prototype for the 28 GHz band. The university calls it the “first digital single-chip millimeter-wave beamformer”. It is manufactured as a single chip using 40 nm CMOS technology and measures around 3 × 3 mm. The chip doesn’t include the 16 antenna elements (which are connected to it, see the image below) but contains the transceiver chains with low-noise amplifiers, phase-locked loops, analog-to-digital converters (ADCs), etc. While each antenna element has a separate ADC, the digital signals from each group of four adjacent ADCs are summed before they reach the baseband processor. Hence, from a MIMO perspective, this is essentially a digital four-antenna receiver.
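
To make the grouped-ADC idea concrete, here is a toy numerical sketch (my own illustration, not code from the paper): 16 digitized element signals are phase-aligned digitally and then summed in groups of four adjacent elements, leaving four streams for the baseband processor. The array geometry, arrival angle, and noise level are all assumed values.

```python
import numpy as np

# Toy sketch of the grouped-ADC architecture (illustrative values only):
# 16 digitized element signals are phase-aligned digitally, then summed
# in groups of 4 adjacent elements, leaving 4 streams for baseband.
rng = np.random.default_rng(0)
n_elements, group_size = 16, 4
d = 0.5                              # element spacing in wavelengths (assumed)
angle = np.deg2rad(20.0)             # direction of the incoming plane wave

# Steering vector of a uniform linear array for this arrival angle
steer = np.exp(2j * np.pi * d * np.arange(n_elements) * np.sin(angle))

# One received snapshot: plane-wave symbol plus noise at each ADC output
symbol = 1 + 1j
noise = 0.1 * (rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements))
x = steer * symbol + noise

# Digital beamforming: counter-rotate each element, then sum groups of 4
aligned = x * steer.conj()
streams = aligned.reshape(n_elements // group_size, group_size).sum(axis=1)
```

Each of the four output streams carries the coherently combined signal of its subarray, so the baseband processor effectively sees a four-antenna receiver.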

One reason to call this a prototype rather than a full-fledged solution is that the chip can only function as a receiver, but that doesn’t take away the fact that this is an important step forward. In an interview with the Michigan Engineering News Center, Professor Michael P. Flynn (who led the research) explains that “With analog beamforming, you can only listen to one thing at a time” and that “This chip represents more than seven years of work by multiple generations of graduate students”.

Needless to say, the first 5G base stations and cell phones that support millimeter-wave bands will make use of hybrid beamforming architectures. For example, the Ericsson Street Macro 6701 (which Verizon is utilizing in its network) contains multiple phased arrays that take 4 inputs and can thereby produce up to 4 simultaneous beams. However, while the early adopters are making use of hybrid architectures, it becomes increasingly likely that fully digital architectures will be available when millimeter-wave technology becomes more widely adopted around the world.

Reciprocity-based Massive MIMO in Action

I have written several posts about Massive MIMO field trials during this year. A question that I often get in the comment field is: Has the industry built “real” reciprocity-based Massive MIMO systems, similar to what is described in my textbook, or is something different under the hood? My answer used to be “I don’t know”, since the press releases do not provide such technical details.

The 5G standard supports many different modes of operation. When it comes to spatial multiplexing of users in the downlink, how the multi-user beamforming is configured is of critical importance for controlling the inter-user interference. There are two main ways of doing that.

The first option is to let the users transmit pilot signals in the uplink and exploit the reciprocity between uplink and downlink to identify good downlink beams. This is the preferred operation from a theoretical perspective; if the base station has 64 transceivers, a single uplink pilot is enough to estimate the entire 64-dimensional channel. In 5G, the pilot signals that can be used for this purpose are called Sounding Reference Signals (SRS). The base station uses the uplink pilots from multiple users to select the downlink beamforming. This is the option that resembles what the textbooks on Massive MIMO are describing as the canonical form of the technology.
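
As a toy illustration of this mode (my own sketch, with illustrative parameters), a single known SRS symbol lets a 64-antenna base station form a least-squares estimate of the full 64-dimensional channel, which reciprocity then makes reusable for downlink beamforming:

```python
import numpy as np

# Toy sketch of SRS-based operation (illustrative parameters): one known
# uplink pilot symbol gives the 64-antenna base station a least-squares
# estimate of the full 64-dimensional channel, reused by reciprocity
# for maximum-ratio downlink beamforming.
rng = np.random.default_rng(1)
M = 64                                   # base-station transceivers
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

pilot = 1 + 0j                           # a single SRS pilot symbol
noise = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = h * pilot + noise                    # received uplink pilot snapshot

h_hat = y * np.conj(pilot) / abs(pilot) ** 2   # least-squares channel estimate
w = np.conj(h_hat) / np.linalg.norm(h_hat)     # maximum-ratio downlink beam

# By reciprocity, the downlink channel equals h, so the beamforming gain
# |h^T w|^2 should be close to the ideal value ||h||^2
gain = abs(h @ w) ** 2
```

The point is that one pilot transmission suffices regardless of how many antennas the base station has, which is exactly why this mode scales so well with Massive MIMO.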

The second option is to let the base station transmit a set of downlink signals using different beams. The user device then reports back some measurement values describing how good the different downlink beams were. In 5G, the corresponding downlink signals are called Channel State Information Reference Signal (CSI-RS). The base station uses the feedback to select the downlink beamforming. The drawback of this approach is that 64 downlink signals must be transmitted to explore all 64 dimensions, so one might have to neglect many dimensions to limit the signaling overhead. Moreover, the resolution of the feedback from the users is limited.
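
The following toy sketch (my own illustration) mimics this mode: the base station sweeps a DFT codebook of beams, and the user feeds back only the index of the strongest one. The 16-beam codebook is deliberately smaller than the 64 antenna dimensions to show the resolution loss; all parameters are assumed.

```python
import numpy as np

# Toy sketch of CSI-RS-based operation (illustrative values): sweep a
# coarse 16-beam DFT codebook from a 64-antenna array and let the user
# feed back the index of the strongest beam.
M, n_beams = 64, 16
d = 0.5                                  # antenna spacing in wavelengths
angle = np.deg2rad(25.0)

# Line-of-sight channel towards the user (uniform linear array)
h = np.exp(2j * np.pi * d * np.arange(M) * np.sin(angle))

# Reduced DFT codebook: only 16 beams instead of one per antenna dimension
sin_grid = np.linspace(-1, 1, n_beams, endpoint=False)
codebook = np.exp(2j * np.pi * d * np.outer(np.arange(M), sin_grid)) / np.sqrt(M)

# User-side measurement and index-only feedback
powers = np.abs(h.conj() @ codebook) ** 2
best = int(np.argmax(powers))

# Compare against ideal beamforming with full channel knowledge
ideal = np.linalg.norm(h) ** 2           # equals M for this channel
loss_db = 10 * np.log10(powers[best] / ideal)
```

Even the best of the 16 beams loses a noticeable fraction of the ideal beamforming gain in this run, which is the kind of resolution penalty that the limited feedback causes.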

In practice, the CSI-RS operation might be easier to implement, but the lower resolution in the beamforming selection will increase the interference between the users and ultimately limit how many users, and layers per user, can be spatially multiplexed to increase the throughput.

New field trial based on SRS

The Signal Research Group has carried out a new field trial in Plano, Texas. The unique thing is that they confirm that the SRS-based operation was used. They utilized hardware and software from Ericsson, Accuver Americas, Rohde & Schwarz, and Gemtek. A 100 MHz channel bandwidth in the 3.5 GHz band was used, the downlink power was 120 W, and a peak throughput of 5.45 Gbps was achieved. Eight user devices received two layers each; thus, the equipment performed spatial multiplexing of 16 layers. The setup was a suburban outdoor cell with inter-cell interference and a one-kilometer range. The average throughput per device was around 650 Mbps and was not much affected when the number of users increased from one to eight, which demonstrates that the beamforming could effectively deal with the interference.
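
The reported numbers are mutually consistent, as this quick check of the arithmetic shows (all values are taken from the trial report as quoted above):

```python
# Quick consistency check of the reported trial numbers (all values are
# taken from the text above; no new measurements are implied).
bandwidth_hz = 100e6            # channel bandwidth in the 3.5 GHz band
peak_rate_bps = 5.45e9          # reported peak throughput
layers, devices = 16, 8

cell_se = peak_rate_bps / bandwidth_hz      # ~54.5 bit/s/Hz for the cell
layer_se = cell_se / layers                 # ~3.4 bit/s/Hz per layer
peak_per_device = peak_rate_bps / devices   # ~680 Mbps peak per device,
                                            # consistent with the ~650 Mbps average
```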

It is great to see that “real” reciprocity-based Massive MIMO delivers such strong performance in practice. In the report that describes the measurements, the Signal Research Group states that not all 5G devices support the SRS-based mode; they had to look for the right equipment to carry out the experiments. Moreover, they point out that:

Operators with mid-band 5G NR spectrum (2.5 GHz and higher) will start deploying MU-MIMO, based on CSI-RS, later this year to increase spectral efficiency as their networks become loaded. The SRS variant of MU-MIMO will follow in the next six to twelve months, depending on market requirements and vendor support.

The following video describes the measurements in further detail:

Even Higher Spectral Efficiency in Massive MIMO Trials

There are basically two approaches to achieve high data rates in 5G: One can make use of huge bandwidths in mmWave bands or use Massive MIMO to spatially multiplex many users in the conventional sub-6 GHz bands.

As I wrote back in June, I am more impressed by the latter approach, since it is more spectrally efficient and requires more technically advanced signal processing. I compared the 4.7 Gbps that Nokia had demonstrated over an 840 MHz mmWave band with the 3.7 Gbps that Huawei had demonstrated over 100 MHz of sub-6 GHz spectrum. The former corresponds to a spectral efficiency of 5.6 bps/Hz, while the latter corresponds to 37 bps/Hz.

T-Mobile and Ericsson recently described a new field trial with even more impressive results. They made use of 100 MHz in the 2.5 GHz band and achieved 5.6 Gbps, corresponding to a spectral efficiency of 56 bps/Hz; an order of magnitude more than one can expect in mmWave bands!

The press release describes that the high data rate was achieved using a 64-antenna base station, similar to the product that I described earlier. Eight smartphones were served by spatial multiplexing, and each received two parallel data streams (so-called layers). Hence, each user device obtained around 700 Mbps. On average, each of the 16 layers had a spectral efficiency of 3.5 bps/Hz, so 16-QAM was probably utilized in the transmission.
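
The arithmetic behind these statements can be reproduced as follows; the code-rate figure at the end is my own back-of-the-envelope inference (16-QAM carries 4 bits per symbol, and all signaling overheads are ignored):

```python
# Reproducing the arithmetic in the paragraph above; the code rate at
# the end is my own inference (16-QAM carries 4 bits per symbol, and
# all signaling overheads are ignored).
rate_bps = 5.6e9
bandwidth_hz = 100e6
layers, devices = 16, 8

cell_se = rate_bps / bandwidth_hz     # 56 bit/s/Hz for the cell
layer_se = cell_se / layers           # 3.5 bit/s/Hz per layer
per_device = rate_bps / devices       # 700 Mbit/s per smartphone
code_rate = layer_se / 4              # ~7/8, assuming 16-QAM
```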

I think these numbers are representative of what 5G can deliver in good coverage conditions. Hopefully, Massive MIMO based 5G networks will soon be commercially available in your country as well.

Machine Learning for Massive MIMO Communications

Since the pandemic has made it hard to travel around the world, several open online seminar series have appeared, each focusing on a different research topic. The idea seems to be to give researchers a platform to attend talks by international experts and to enable open discussions.

There is a “One World Signal Processing Seminar Series”, which has partly covered topics in multi-antenna communications. I want to highlight one such seminar, in which Professor Wei Yu (University of Toronto) talks about Machine Learning for Massive MIMO Communications. The video contains a 45-minute presentation plus another 30 minutes where questions are answered.

There are also several other seminars in the series. For example, I gave a talk myself on “A programmable wireless world with reconfigurable intelligent surfaces”. On August 24, Prof. David Gesbert will talk about “Learning to team play”.

Chasing Data Rate Records

5G networks are supposed to be fast and provide higher data rates than ever before. While indoor experiments have demonstrated huge data rates in the past, this is the year when the vendors are competing to set new data rate records in real deployments.

Nokia achieved 4.7 Gbps in an unnamed carrier’s cellular network in the USA in May 2020. This was achieved by dual connectivity, where a user device simultaneously used 800 MHz of mmWave spectrum in 5G and 40 MHz of 4G spectrum.

The data rate with the Nokia equipment was higher than the 4.3 Gbps that Ericsson demonstrated in February 2020, but Ericsson “only” used 800 MHz of mmWave spectrum. While there are no details on how the 4.7 Gbps was divided between the mmWave and LTE bands, it is likely that Ericsson and Nokia achieved roughly the same data rate over the mmWave bands; the main new aspect was rather the dual connectivity between 4G and 5G.

The high data rates in these experiments are enabled by the abundant spectrum, while the spectral efficiency is only around 5.4 bps/Hz. This can be achieved by 64-QAM modulation and high-rate channel coding, a combination of modulation and coding that was already available in LTE. From a technology standpoint, I am more impressed by reports of 3.7 Gbps being achieved over only 100 MHz of bandwidth, because then the spectral efficiency is 37 bps/Hz. That can be achieved in conventional sub-6 GHz bands, which have better coverage and, thus, a more consistent 5G service quality.
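
Reproducing the spectral-efficiency arithmetic above; the code rate is my own rough inference (64-QAM carries 6 bits per symbol, and signaling overheads are ignored):

```python
# Spectral-efficiency arithmetic for the two experiments above; the
# code rate is my own rough inference (64-QAM carries 6 bits/symbol;
# signaling overheads are ignored).
mmwave_se = 4.3e9 / 800e6    # Ericsson: 4.3 Gbps over 800 MHz -> ~5.4 bit/s/Hz
sub6_se = 3.7e9 / 100e6      # 3.7 Gbps over 100 MHz -> 37 bit/s/Hz
code_rate = mmwave_se / 6    # ~0.9, assuming 64-QAM (6 bits/symbol)
```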

How “Massive” are the Current Massive MIMO Base Stations?

I have written earlier that the Massive MIMO base stations that have been deployed by Sprint and other operators are very capable from a hardware perspective. They are equipped with 64 fully digital antennas, have a rather compact form factor, and can handle wide bandwidths in the 2-3 GHz bands. These facts are supported by documentation that can be accessed in the FCC databases.

However, we can only guess what is going on under the hood – what kind of signal processing algorithms have been implemented and how they perform compared to ideal cases described in the academic literature. Erik G. Larsson recently wrote about how Nokia improved its base station equipment via a software upgrade. Are the latest base stations now as “Massive MIMO”-like as they can become?

My guess is that there is still room for substantial improvements. The following joint video from Sprint and Nokia explains how their latest base stations are running 4G and 5G simultaneously on the same 64-antenna base station and are able to multiplex 16 layers.

“This is the highest number of multiuser MIMO layers achieved in the US,” according to the speaker. But if you listen carefully, they are actually sending 8 layers on 4G and 8 layers on 5G. That doesn’t add up to 16 spatially multiplexed layers! The things called layers in 3GPP are signals that are transmitted simultaneously in the same band, but with different spatial directivity. In every part of the spectrum, there are only 8 spatially multiplexed layers in the setup considered in the video.

It is indeed impressive that Sprint can simultaneously deliver around 670 Mbit/s per user to 4 users in the cell, according to the video. However, the spectral efficiency per cell is “only” 22.5 bit/s/Hz, which can be compared to the 33 bit/s/Hz that was achieved in real-world trials by Optus and Huawei in 2017.

Both numbers are far from the world record in spectral efficiency of 145.6 bit/s/Hz, which was achieved in a lab environment in Bristol in a collaboration between the universities in Bristol and Lund. Although we cannot expect to reach those numbers in real-world urban deployments, I believe we can reach higher numbers by building 64-antenna arrays with a different form factor: long linear arrays instead of compact square panels. Since most users are separable in the sense of having different azimuth angles to the base station, it will be easier to separate them by sending “narrower” beams in the horizontal domain.
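
The beamwidth argument can be checked numerically. The sketch below (my own illustration, assuming half-wavelength spacing and broadside beams) compares the azimuth half-power beamwidth of a 64-element linear array with that of the 8-element horizontal dimension of an 8 × 8 square panel:

```python
import numpy as np

# Numerical sketch of the form-factor argument (my own illustration,
# assuming half-wavelength element spacing and a broadside beam):
# azimuth half-power beamwidth of a 64-element uniform linear array
# versus the 8-element horizontal dimension of an 8x8 square panel.

def azimuth_beamwidth_deg(n_horizontal, d=0.5):
    """Half-power azimuth beamwidth (degrees) of a broadside ULA beam."""
    u = np.linspace(-1, 1, 20001)        # u = sin(azimuth angle)
    phase = 2j * np.pi * d * np.outer(np.arange(n_horizontal), u)
    pattern = np.abs(np.exp(phase).sum(axis=0)) ** 2 / n_horizontal ** 2
    main_lobe = u[pattern >= 0.5]        # region within 3 dB of the peak
    return np.degrees(np.arcsin(main_lobe.max()) - np.arcsin(main_lobe.min()))

bw_linear = azimuth_beamwidth_deg(64)    # long linear array: ~1.6 degrees
bw_square = azimuth_beamwidth_deg(8)     # 8x8 panel, horizontal cut: ~13 degrees
```

The roughly eightfold narrower horizontal beam of the linear array is what makes users at different azimuth angles easier to separate.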

Record 5G capacity via software upgrade!

In the news: Nokia delivers record 5G capacity gains through a software upgrade. No surprise! We expected, years ago, that this would happen.

What does this software upgrade consist of? I can only speculate. It is, in all likelihood, more than the usual (and endless) operating-system bugfixes we habitually think of as “software upgrades”. Could it even be something that goes to the core of what Massive MIMO is? Replacing eigen-beamforming with true reciprocity-based beamforming?! Who knows. Replacing maximum-ratio processing with zero-forcing combining?! Or, even more mind-boggling, implementing more sophisticated processing of the sort that has been filling the academic journals in recent years? We don’t know! But it will certainly be interesting to find out at some point, and it seems safe to assume that this race will continue.
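
To give a feel for how much such an algorithmic swap can matter, here is a toy Monte-Carlo comparison (my own sketch, with illustrative parameters: 64 antennas, 8 users, i.i.d. Rayleigh fading, 0 dB SNR) of maximum-ratio versus zero-forcing combining on the uplink:

```python
import numpy as np

# Toy Monte-Carlo comparison of two candidate processing choices:
# maximum-ratio (MR) versus zero-forcing (ZF) combining on the uplink.
# 64 antennas, 8 users, i.i.d. Rayleigh fading, 0 dB SNR; all values
# are illustrative and unrelated to any actual product.

def sum_se(H, V, snr):
    """Uplink sum spectral efficiency [bit/s/Hz] with combining matrix V."""
    G = V.conj().T @ H                            # G[k, i] = v_k^H h_i
    desired = snr * np.abs(np.diag(G)) ** 2
    interference = snr * (np.abs(G) ** 2).sum(axis=1) - desired
    noise = np.linalg.norm(V, axis=0) ** 2
    return np.log2(1 + desired / (interference + noise)).sum()

rng = np.random.default_rng(3)
M, K, snr, trials = 64, 8, 1.0, 200
se_mr = se_zf = 0.0
for _ in range(trials):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    se_mr += sum_se(H, H, snr) / trials                                  # MR combining
    se_zf += sum_se(H, H @ np.linalg.inv(H.conj().T @ H), snr) / trials  # ZF combining
```

In this setting, zero-forcing clearly outperforms maximum-ratio because it cancels the inter-user interference, at the cost of a small matrix inversion; exactly the kind of gain that a pure software upgrade could unlock.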

A lot of improvement could be achieved over the baseline canonical Massive MIMO processing. One could, for example, exploit fading correlation, develop improved power control algorithms, or implement algorithms that learn the propagation environment, autonomously adapt, and predict the channels.

It might seem that research has already squeezed every drop out of the physical layer, but I do not think so. Huge gains likely remain to be harvested when resources are tight, especially when we are limited by coherence: high carrier frequencies mean short coherence, and high mobility might mean almost no coherence at all. When the system is starved of coherence, even winning a couple of samples on the pilot channel means a lot. Room for new elegant theory in “closed form”? Good question. It could sound heartbreaking, but maybe we have to give up on that. Room for useful algorithms and innovation? Certainly yes. A lot. The race will continue.